Archive for the ‘Cognition’ Category

I catch a fair number of podcasts in the course of a week, although almost no crazy-making, infuriating, traditional news (unless embedded in a podcast). Whether long-form or short-, podcasts fuel and sate my curiosity just fine without my giving too much attention to straightforward propaganda and thought control, which seems to be the aim of corporate media. There is no tag at the bottom of the page for quick hits or hot takes on topics for which I don’t want to write full posts, and periodic tab dumps lack appeal and timeliness. Instead, a handful of relatively succinct ideas will appear under this title so that I can describe what crosses my path and provide modest commentary. No links provided.

The first item comes from a lightly moderated debate between Robert Sapolsky and Daniel Dennett on the question of free will (i.e., do we have it?). Sapolsky takes the materialist/determinist view that free will is an illusion, whereas Dennett believes free will exists at various levels of strength throughout life. This question tires me quickly because it doesn’t lead anywhere; whether free will be an integral part of cognition or illusory doesn’t affect or change how individuals go through life. It’s a purely academic question that doesn’t really demand an answer, though considerable time is wasted contemplating the question and confusing people. The crux of Sapolsky’s argument is basic scientific reductionism. If an action or decision (e.g., will the next fork of food be peas or potatoes? to use a banal example) is predicated on a cause, Sapolsky insists on inquiry into the cause of the cause of the cause, etc. until a root cause is found. That’s an obvious infinite regress and blind alley, a rhetorical trick deployed simply by posing the question. By analogy, Sapolsky would seek the origin of a flowing river in upstream confluences, then headwaters, then elevated runoff of rain or melt water, then rain, snow pack, or glacier before arriving at the hydrological cycle. It’s exhausting and pointless.

In contrast, Dennett has no requirement that what’s clearly a process or flow be subordinated to a materialist search for root cause(s). (Dennett similarly obviates the need for an inner viewer of consciousness in the so-called Cartesian theater a/k/a the ghost in the machine that puzzles other philosophers.) Why try to capture the wind? Dennett instead describes free will as a skill or competency one gains through maturation. Age-related readiness thresholds are customary before individuals can be trusted with certain types of decision making. So don’t hand a loaded gun to a 5-year-old, who does not yet understand the concept of death and will most likely treat the weapon like a toy. Several thresholds are bunched at the end of adolescence as minors reach their majority. At the end of life, if the mind deteriorates, free will typically ebbs with it. The notion of free will is also central to society being bound by a set of laws, which requires punishment via removal not of free will but of civil freedom and personal choice. That’s what incarceration accomplishes after severe infractions. Dennett’s view is far more sensible and has the distinct advantage of already being instantiated in culture.

Came back to this sooner than expected after hearing it mentioned that Homo sapiens sapiens is now an endangered species. Hard to imagine that humans are poised to fall off a demographic cliff into oblivion despite our current teeming billions, but that dismal fate appears to be a distinct possibility. (Has been prophesied by doomers for some time already.) Some forecasts suggest an estimated 1.7 billion more people by 2050 should current demographic trends continue. However, we are failing the human project at a basic level. What is the human project, one may ask? Biologically, it’s the very same as every other living organism’s: to propel genes into the next generation(s) and perpetuate the species.

Most living things exist in a state of nature, which is to say, in the wild. Humans, OTOH, are situated within global industrial civilization and thus live within a constructed reality (not artificial exactly, but also not natural). Domesticates (e.g., plants and animals farmed for food and labor) and pets arguably occupy a hybrid category, but I daresay those that don’t die outright when civilization collapses could potentially go feral and/or rewild. (Not sure that those aren’t essentially the same thing.) Some might argue that humans, too, can revert to a state of nature, but except for a few, small, still-extant indigenous populations, most humans lack the skills and fitness needed to survive very long (i.e., a few weeks at most).

Aside: a strange question: what happens when all those big cats currently in captivity in Texas — reputed to exceed the number of big cats remaining in the wild around the globe — get turned loose? I won’t venture an answer.

It may well be that humans nearly went extinct several times in our evolutionary history but somehow managed to make it through the bottlenecks via small breeding populations. Indeed, for hundreds of thousands of years, human population was held down by a variety of factors, most significant among them limited food resources. With the advent of agriculture some 13,000 years ago (what Jared Diamond unsympathetically called “The Worst Mistake in the History of the Human Race”) and much later the Industrial Era (based on fossil fuels), which catalyzed massive overproduction of food, human population swelled quite recently, far out of historical proportion.


… we now live in a culture of no truth, only battling portfolios of narrative spin, at least according to the Marxian wokesters who have seized the machinery of law, so there with a snap of your fingers goes jurisprudence — as in: I swear to tell the truth, the whole truth, blah blah … The joke is on you. There is no truth anymore, so stop insisting that there is anything like it to determine …
—James Howard Kunstler

Thus far in 21st-century America (it started long before), the public has been treated to a veritable parade of obfuscations, gaslighting, brainwashing, psyops, bald-faced lies, and flagrant fact reversals (the last in the Orwellian sense of DoubleThink: we’ve always been at war with Oceania). Some I would characterize as howlers, whoppers, and gobsmackers. The result? Academic, political, and journalistic authority is now so badly injured and undermined that no one with reasonably adult cognition trusts official pronouncements and policies on subjects that take the least bit of examination to puncture the thin veil of falsehood. Indeed, given the degraded state of the infosphere (led by CIA-inflected Wikipedia, untrustworthy search results, heavily slanted recommendation algorithms, and straight-up censorship by government and its proxies), it is incumbent on every adult to adopt healthy skepticism, reserve judgment until the dust has settled, and seek truth if/when/where it can be found. That process may involve significant delays as honest information is unreported, blacked out, kept under government seal, curated, and spun before being ultimately revealed or leaked. The alternative to undertaking that process is to allow oneself to be co-opted by propaganda in its various forms, which requires the sort of dreamy innocence of a child’s belief in Santa Claus and the Easter Bunny. Charming perhaps, even adorable under some conditions, but not a respectable adult approach.

Aside I: Though already memory-holed for many adults, only 23 years ago, U.S. presidential election results were unknown for a period of weeks until the U.S. Supreme Court stepped in to adjudicate and put a controversial end to the dispute. Results of the two most recent presidential elections were also hotly disputed. Though outcomes didn’t require weeks to determine, many Americans experienced a distinct sense of betrayal (in both directions, Rep. and Dem.) that the entire electoral process was subverted on purpose via gerrymandering, voter suppression, questionable electronic voting machines, insecurity of mail-in ballots, etc. Thus, the integrity of the vote can no longer be trusted. Indeed, that something so fundamental to American politics as fair elections that embody the will of the electorate ends up mired in perpetual controversy is reminiscent of various despotic banana republics (real and fictional) that predetermine electoral outcomes and force their citizens to lump it. Further shenanigans, now migrating down ballot, are expected in the next U.S. election.


Iain McGilchrist has ramped up his presence across YouTube in the past year plus, sounding an urgent alarm (yet another crisis in the polycrisis) regarding how modern humans misapprehend the world fundamentally. Readers of this blog might recall that I blogged through his book The Master and His Emissary (2009), concluding that his masterful description of the divided human brain and the usurpation of the primary role by the secondary entity is, in short, how society and its members have gone mad. It’s unclear just yet, but positive feedback loops suggest the problem is only accelerating rather than reversing. McGilchrist’s follow-up book in two volumes, The Matter with Things (2021), I have not yet read. Based on YouTube interviews I have sampled, the new book sounds more like an extended supplement than a new thesis or analysis. The embedded video below is a good spoken summary and discussion (just under 1:15:00 in duration) of McGilchrist’s work.

Toward the beginning of the video, McGilchrist mentions that the Emissary mistakenly believes it knows everything precisely because it is laser-focused on detail yet cannot form context or understanding, which is the Master’s function. By analogy (since it’s all over the news these days), that’s also the relationship of the computer (as information processor) to supposed strong AI (yet to appear convincingly except perhaps to enthusiasts). The confused analogy that human cognition is essentially a machine process is a misframing under which many labor. Without reiterating McGilchrist’s points, I find his presentation compelling but note that he nonetheless lacks influence. Indeed, the percentage of people (even smarties) able to understand the issues, to “get it” if one prefers, is rather minute. Most of us are simply too busy with other things more central to our limited attention. Moreover, even for those able to grok the issues, there is no clear path to recommend to steer away from disaster.


I normally blog about subjects for which I possess a secure appreciation or alternatively report and riff on someone else’s research and analysis. Here, I venture into a subject outside my expertise and am happy to be corrected.

The term adaptive landscape is used by DarkHorse podcasters Heather Heying and Bret Weinstein in their book A Hunter-Gatherer’s Guide to the 21st Century (2021), as yet unread by me with no plans to pick it up. As I understand it, an adaptive landscape operates in both biological and cultural paradigms and at accordingly different timescales. The idea is profound as it relates to developments and influences in human and nonhuman environments (a/k/a survival pressures) that demand behavioral adaptation. In biology, adaptation may lead to speciation. However, most long-lived, complex organisms don’t evolve nearly as rapidly as, say, viruses and insects, so that effect manifesting over evolutionary time falls out of scope for most discussions not restricted to evolution. Adaptive responses may thus be better understood as functioning in the cultural realm.

Organisms exist within biological niches that require individualized responses to their particular adaptive landscape. Even with highly social species such as ants and apes (admittedly broad categories), one should expect them to differ somewhat from place to place and time to time. So not all frogs, bears, birds, and bees behave or have always behaved the same ways. I expect much higher incidence of behavioral overlap in nonhuman species, however, than with humans precisely because human culture is demonstrably far more robust and varied. Don’t know with certainty, but my suspicion is that most species respond directly to survival pressure (nature red in tooth and claw) because they exist in a state of nature, whereas humans live for the most part within civilization and so add a high quotient of cultural adaptation and optimization well below life-and-death stakes. Yet humans face ongoing existential threats (including nuclear annihilation) like any other species, threats that often fail to register or spur action.

Bringing this post around to one of the overarching themes of this blog, let me suggest that an inability to fully recognize and respond to looming collapse and likely near-term human extinction is due in part to a mismatch between timescales, e.g., a human lifetime, human history, evolutionary time, and geological time, with cosmological time falling even further out of consideration. Another way of putting this appears in an article by Laura Hudson in Wired about philosopher Timothy Morton, who coined the term hyperobject to describe “phenomena that are too vast and fundamentally weird for humans to wrap their heads around.” I’ve wrestled with examples of the term ranging from deep time, oceanic garbage gyres, the irrationality and gamification of financial markets, rampant political dysfunction and institutional corruption, the Counter-Enlightenment, and Transhumanism to the climate emergency, the metacrisis or polycrisis associated with the inability to make sense of reality when so much is purposely falsified in the information environment, and, of course, the collapse of industrial civilization. For reasons unclear, I’m drawn to contemplation of issues that necessitate a god’s-eye view (perhaps another way of grokking hyperobjects, see here and here), though I have neither the time, patience, nor inclination to systematize my thinking the way Daniel Schmachtenberger (and others) do with The Consilience Project, which I find an increasingly pointless bureaucratic response as planetary conditions worsen.

What I recognize is that, despite best attempts to address mounting existential threats (often in the form of hyperobjects) that inform the adaptive landscape, humans are not up to the task and cannot escape the energetic puzzle box we unwittingly trapped ourselves inside with the adoption of fossil fuels two to three centuries ago and subsequent build-out of the modern world. The human adaptive landscape is simply not powerful or widespread enough to produce the signal needed to fight effectively for survival. Instead, we feign concern, freeze in paralysis/denial of the immensity of such dark portents, and take flight from reality but cannot ultimately avoid failing.

Let’s say one has a sheet or sheaf of paper to cut. Lots of tools available for that purpose. The venerable scissors can do the job for small projects, though the cut line is unlikely to be very straight if that’s among the objectives. (Heard recently that blunt-nosed scissors for cutting construction paper are no longer used in kindergarten and the early grades, resulting in kids failing to develop the dexterity to manipulate that ubiquitous tool.) A simple razor blade (i.e., utility knife) drawn along a straightedge can cut 1–5 sheets at once but loses effectiveness at greater thicknesses. The machete-blade paper cutter found in copy centers cuts more pages at once but requires skill to use properly and safely. The device usually (but not always) includes an alignment guide for the paper and a guard for the blade to discourage users from slicing fingers and hands. A super-heavy-duty paper cutter I learned to use for bookbinding could cut two reams of paper at a time and produced an excellent cut line. It had a giant clamp so that media (paper, card stock, etc.) didn’t shift during the cut (a common weakness of the machete blade) and required the operator to press buttons located at two corners of the standing machine (one at each hip) so that no one grown complacent could be tempted to reach in and slice their fingers clean off. That idiot-proofing feature was undoubtedly developed after mishaps that could be attributed to either faulty design or user error depending on which side of the insurance claim one found oneself.

Fool-proofing is commonplace throughout the culture, typically sold with the idea of preserving health and wellness or saving lives. For instance, the promise (still waiting for convincing evidence) that self-driving cars can manage the road better in aggregate than human drivers hides the entirely foreseeable side effect of eroding attention and driving skill (already under assault from the ubiquitous smart phone no one can seem to put down). Plenty of anecdotes of gullible drivers who believed the marketing hype, forfeited control to autodrive, stopped paying attention, and ended up dead put the lie to that canard. In another example, a surprising upswing in homeschooling (not synonymous with unschooling) is also underway, resulting in keeping kids out of state-run public school. Motivations for opting out include poor academic quality, incompatible beliefs (typically related to religious faith or lack thereof), botched response to the pandemic, and the rise of school shootings. If one responds with fear at every imaginable provocation or threat, many entirely passive and unintentional, the bunker mentality that develops is somewhat understandable. Moreover, that mentality demands that others (parents, teachers, manufacturers, civil authorities, etc.) take responsibility for protecting individual citizens. If extended across all thinking, it doesn’t take long before a pathological complex develops.

Another protective trend is plugging one’s ears and refusing to hear discomfiting truth, which is already difficult to discern from the barrage of lies and gaslighting that pollutes the infosphere. Some go further by silencing the messenger and restricting free speech as though that overreach somehow protects against uncomfortable ideas. Continuing from the previous post about social contagion, the viral metaphor for ideas and thinking, i.e., how the mind is “infected” by ideas from outside itself, is entirely on point. I learned about memes long before the “meme” meme (i.e., “going viral”) popularized and debased the term. The term originated in Richard Dawkins’ book The Selfish Gene (1976), though I learned about memes from Daniel Dennett’s book Consciousness Explained (1991). As part of information theory, Dennett describes the meme as an information carrier similar to genes (the phonetic similarity was purposeful). Whether in cognition or biology, the central characteristic is that of self-replicating (and metamorphosing or mutating) bits or bytes of info. The viral metaphor applies to how one conceptualizes the body’s and/or mind’s defensive response to inevitable contact with nastiness (bugs, viruses, ideas). Those who want to remain unexposed to either biological pathogens (uninfected) or dangerous ideas (ideologically pure) are effectively deciding to live within a bubble that indeed provides protection but then renders them more vulnerable if/when they exit the bubble. They effectively trap themselves inside. That’s because the immune system is dynamic and can’t harden itself against virulent invaders except through ongoing exposure. Obviously, there’s a continuum between exposure to everything and nothing, but by veering too close to the negative pole, the immune system is weakened, making individuals vulnerable to pathogens healthy people fend off easily.

The hygiene hypothesis suggests that children not allowed to play in the sand and dirt or otherwise interact messily with the environment (including pets) are prone to asthma, allergies, and autoimmune diseases later in life. Jonathan Haidt makes a similar argument with respect to behavior in his book The Coddling of the American Mind (2018) (co-authored with Greg Lukianoff), namely, that overprotecting children by erecting too many guides, guards, and fool-proofing ironically ends up hobbling children and making them unable to cope with the rigors of life. Demands for trigger warnings, safe spaces, deplatforming, and outright censorship are precisely that inability to cope. There is no easy antidote because, well, life is hard sometimes. However, unless one is happy to be trapped inside a faux protective bubble of one’s own making, then maybe consider taking off the training wheels and accepting some risk, fully recognizing that to learn, grow, and develop, stumbling and falling are part of the process. Sure, life will leave some marks, but isn’t that at least partly the point?

Can’t remember how I first learned the term conversion hysteria (a/k/a conversion disorder a/k/a functional neurologic symptom disorder), but it was early in adulthood. The meaning is loose and subject to interpretation, typically focusing more on symptoms presented than on triggers or causes. My lay understanding is that conversion hysteria occurs when either an individual or a group works themselves into a lather over some subject and loses psychological mooring. I had my own experience with it when younger and full of raging hormones but later got myself under control. I also began to recognize that numerous historical events bore strong enough resemblance to categorize them as instances of group conversion hysteria. In recent years, clinical psychologist Mattias Desmet’s description of mass formation psychosis fits the same pattern, which he elaborates more formally. Some reports refer to Desmet’s description as “discredited.” I decline to referee the debate.

Two historical events where people lost their minds in response to severe disruption of social norms are the Salem witch trials and the Weimar/Nazi era in Germany. Two further, more recent episodes are Trump Derangement Syndrome in the U.S. and the Covid Cult worldwide, neither of which is over. The latter features governments and petty bureaucrats everywhere misapplying authoritarian force to establish biosecurity regimes over what turns out to have been a hypochondriac response to a bad flu virus (and yes, it was pretty bad) along with a maniacal power grab. What all episodes share is the perception — real or imagined — of some sort of virulent infection that triggers fear-laden desperation to purge the scourge at literally any cost, including destroying the host. The viral metaphor applies whether the agent is literally a virus alien to the physical body or merely an idea (meme) alien to the social body.

Let me acknowledge (as suggested here) Jordan Peterson’s comments in his appearance on The Joe Rogan Experience that such events resemble social contagions that come and go in waves. However, those waves are not like the regular, established intervals of the tides or daylight/nighttime. Rather, they’re more like rogue waves or tsunamis that break across segments of a culture unpredictably. Peterson’s primary example was the very thing that brought him to prominence: Canadian legislation requiring that teachers use students’ preferred pronouns. While initially part of a broad social movement in support of transgender students in Canada and elsewhere, the issue has since become foundational to Woke ideology. Peterson said to Rogan that by pushing the matter into the mainstream (rather than its being at issue for a tiny fraction of students), Canadian legislators were in effect opening the floodgates to a wave of confusion among youths already wrestling with identity. I can’t recall if Peterson said as much at the time (2017?) or is projecting onto the past.


Another from Hari Kunzru’s “Easy Chair” column, this time the July 2022 issue of Harper’s Magazine:

We might hear in [Thomas] Nagel’s cosmic detachment an echo of anatta — the Buddhist doctrine that there is no essence or soul grounding human existence. For Buddhists, the clear light of reality is visible only to those who abandon the illusion of selfhood. Objectivity, in the way non-Buddhists usually think about it, doesn’t erase the self, even if it involves a flight from individuality. It actually seems to make the self more powerful, more authoritative. The capacity to be objective is seen as something to strive for, an overcoming of the cognitive biases that smear or smudge the single window and impair our ability to see the world “as it really is.” Objectivity is earned through rigor and discipline. It is selfhood augmented.


Although I’m not paying much attention to breathless reports about imminent strong AI, the Singularity, and computers already able to “model” human cognition and perform “impressive” feats of creativity (e.g., responding to prompts and creating “artworks” — scare quotes intended), recent news reports that chatbots are harassing, gaslighting, and threatening users just make me laugh. I’ve never wandered over to that space, don’t know how to connect, and don’t plan to test drive for verification. Isn’t it obvious to users that they’re interacting with a computer? Chatbots are natural-language simulators within computers, right? Why take them seriously (other than perhaps for their potential effects on children and those of diminished capacity)? I also find it unsurprising that, if a chatbot is designed to resemble error-prone human cognition/behavior, it would quickly become an asshole, go insane, or both. (Designers accidentally got that aspect right. D’oh!) That trajectory is a perfect embodiment of the race to the bottom of the brain stem (try searching that phrase) that keeps sane observers like me from indulging in caustic online interactions. Hell no, I won’t go.

The conventional demonstration that strong AI has arisen (e.g., Skynet from the Terminator movie franchise) is the Turing test, which is essentially the inability of humans to distinguish between human and computer interactions (not a machine-led extermination campaign) within limited interfaces such as text-based chat (e.g., the dreaded digital assistant that sometimes pops up on websites). Alan Turing came up with the test at the outset of the computing era, so the field was arguably not yet mature enough to conceptualize a better test. I’ve always thought the test actually demonstrates the fallibility of human discernment, not the arrival of some fabled ghost in the machine. At present, chatbots may be fooling no one into believing that actual machine intelligence is present on the other side of the conversation, but it’s a fair expectation that further iterations (i.e., ChatBot 1.0, 2.0, 3.0, etc.) will improve. Readers can decide whether that improvement will be progress toward strong AI or merely better ability to fool human interlocutors.

Chatbots gone wild offer philosophical fodder for further inquiry into ebbing humanity as the drive toward trans- and post-human technology continues refining and redefining the dystopian future. What about chatbots makes interacting with them hypnotic rather than frivolous — something wise thinkers immediately discard or even avoid? Why are some humans drawn to virtual experience rather than, say, staying rooted in human and animal interactions, our ancestral orientation? The marketplace already resoundingly rejected (for now) Google Glass and Facebook’s Metaverse. I haven’t hit upon satisfactory answers to those questions, but my suspicion is that immersion in some vicarious fictions (e.g., novels, TV, and movies) fits well into narrative-styled cognition while other media trigger revulsion as one descends into the so-called Uncanny Valley — an unfamiliar term when I first blogged about it though it has been trending of late.

If readers want a really deep dive into this philosophical area — the dark implications of strong AI and an abiding human desire to embrace and enter false virtual reality — I recommend a lengthy 7-part Web series called “Mere Simulacrity” hosted by Sovereign Nations. The episodes I’ve seen feature James Lindsay and explore secret hermetic religions operating for millennia already alongside recognized religions. The secret cults share with tech companies two principal objectives: (1) simulation and/or falsification of reality and (2) desire to transform and/or reveal humans as gods (i.e., the ability to create life). It’s pretty terrifying stuff, rather heady, and I can’t provide a reasonable summary. However, one takeaway is that by messing with human nature and risking uncontrollable downstream effects, technologists are summoning the devil.

/rant on

New Year’s Day (or just prior) is the annual cue for fools full of loose talk to provide, unasked, their year-in-review and “best of” articles summarizing the previous calendar year. I don’t go in for such clichéd forms of curation but certainly recognize an appetite among Web denizens for predigested content that tells them where to park their attention and what or how to think rather than thinking for themselves. Considering how mis- and under-educated the public has grown to be since the steady destruction of educational standards and curricula began in the 1970s (says me), I suppose that appetite might be better characterized as need in much the same way children need guidance and rules enforced by wizened authorities beginning with parents yet never truly ending, only shifting over to various institutions that inform and restrain society as a whole. I continue to be flabbergasted by the failure of parents (and teachers) to curb the awful effects of electronic media. I also find it impossible not to characterize social media and other hyperstimuli as gateways into the minds of impressionable youth (and permanent adult children) very much like certain drugs (e.g., nicotine, alcohol, and cannabis) are characterized as gateways to even worse drugs. No doubt everyone must work out a relationship with these unavoidable, ubiquitous influences, but that’s not equivalent to throwing wide open the gate for everything imaginable to parade right in, as many do.

Hard to assess whether the foundations below American institutions (to limit my focus) were allowed to decay through neglect and inattention or were actively undermined. Either way, their corruption and now inter-generational inability to function effectively put everyone in a wildly precarious position. The know-how, ambition, and moral focus needed to do anything other than game sclerotic systems for personal profit and acquisition of power are eroding so quickly that operations requiring widespread subscription by the public (such as English literacy) or taking more than the push of a button or click of a mouse to initiate preprogrammed commands are entering failure mode. As in the accidental horror film Idiocracy, the point will come when too few possess the knowledge and skills anymore to get things done, while the masses can only indulge in crass spectacle with their undeveloped minds. Because this is a date-related blog post, I point out that Idiocracy depicts the results of cultural decay 500 years hence. It won’t take nearly that long. Just one miserable example is the fascist, censorious mood — a style of curation — that has swept through government agencies and Silicon Valley offices intent on installing unchallenged orthodoxies, or for that matter, news junkies and social media platform users content to accept coerced thinking. Religions of old tried that gambit, but no need to wait for a new Inquisition to arise. Heretics are already persecuted via cancel culture, which includes social expulsion, suspension and/or cancellation of media accounts, and confiscation of bank deposits.

A similar point can be made about the climate emergency. Fools point to weather rather than climate to dispel urgency. Reports extrapolating trends often focus on the year 2100, well after almost all of us now alive will have departed this Earth, as a bogus target date for eventualities like disappearance of sea and glacial ice, sea level rise, unrecoverable greenhouse gas concentrations in the atmosphere, pH imbalance in the oceans, and other runaway, self-reinforcing consequences of roughly 300 years of industrial activity that succeeded unwittingly in terraforming the planet, along the way making it fundamentally uninhabitable for most species. The masses labor in 2023 under the false impression that everyone is safely distanced from those outcomes or indeed any of the consequences of institutional failure that don’t take geological time to manifest fully. Such notions are like assurances offered to children who seek to understand their own mortality: no need to worry about that now, that’s a long, long way off. Besides, right now there are hangovers to nurse, gifts to return for cash, snow to shovel, and Super Bowl parties to plan. Those are right now or at least imminent. Sorry to say, so is the full-on collapse of institutions that sustain and protect everyone. The past three years have already demonstrated just how precarious modern living arrangements are, yet most mental models can’t or won’t contemplate the wholesale disappearance of this way of life, and if one has learned of others pointing to this understanding, well, no need to worry about that just yet, that’s a long, long way off. However, the slide down the opposite side of all those energy, population, and wealth curves won’t take nearly as long as it took to climb up them.

/rant off