Posts Tagged ‘science’

See this post on Seven Billion Day only a few years ago as a launching point. We’re now closing in on 7.5 billion people worldwide according to the U.S. Census Bureau. At least one other counter indicates we’ve already crossed that threshold. What used to be called the population explosion or the population bomb has lost its urgency and become generically population growth. By now, application of euphemism to mask intractable problems should be familiar to everyone. I daresay few are fooled, though plenty are calmed enough to stop paying attention. If there is anything to be done to restrain ourselves from proceeding down this easily recognized path to self-destruction, I don’t know what it is. The unwillingness to accept restraints in other aspects of human behavior demonstrates pretty well that consequences be damned — especially if they’re far enough delayed in time that we get to enjoy the here and now.

Two additional links (here and here) provide abundant further information on population growth if one desires to delve more deeply into the topic. The tone of these sites is sober, measured, and academic. As with climate change, hysterical and panic-provoking alarmism is avoided, but dangers known decades and centuries ago have persisted without serious redress. While it’s true that growth rate (a/k/a replacement rate) has decreased considerably since its peak in 1960 or so (the height of the postwar baby boom), absolute numbers continue to climb. The lack of immediate concern reminds me of Al Bartlett’s articles and lectures on the failure to understand the exponential function in math (mentioned in my prior post). Sure, boring old math about which few care. The metaphor that applies is yeast growing in a culture with a doubling factor that makes everything look just peachy until the final doubling that kills everything. In this metaphor, people are the unthinking yeast that believe there’s plenty of room and food and other resources in the culture (i.e., on the planet) and keep consuming and reproducing until everyone dies en masse. No one really knows how far away in time that final human doubling lies.
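
Bartlett’s point about the exponential function is easy to make concrete. Here is a minimal Python sketch of the yeast metaphor, with purely illustrative numbers (a hypothetical carrying capacity and starting population), showing how unremarkable things look until the very last doubling.

    # Toy model of the yeast-in-a-culture metaphor (illustrative numbers only).
    # The population doubles each generation; the vessel has a fixed capacity.
    capacity = 1_000_000   # hypothetical carrying capacity of the culture
    population = 1_000     # hypothetical starting population
    generation = 0

    while population < capacity:
        print(f"generation {generation}: {population / capacity:.1%} of capacity used")
        population *= 2    # one doubling per generation
        generation += 1

    print(f"generation {generation}: the culture is full (or overshot)")

The output shows the culture only half full one generation before the end, which is exactly the false comfort the metaphor warns about.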

Which brings me to something rather ugly: hearings to confirm Brett Kavanaugh’s appointment to the U.S. Supreme Court. No doubt conservative Republican presidents nominate similarly conservative judges just as Democratic presidents nominate progressive centrist judges. That’s to be expected. However, Kavanaugh is being asked pointed questions about settled law and legal precedents perpetually under attack by more extreme elements of the right wing, including Roe v. Wade from 1973. Were we (in the U.S.) to revisit that decision and remove legal abortion (already heavily restricted), public outcry would be horrific, to say nothing of the return of so-called back-alley abortions. Almost no one undertakes such actions lightly. A look back through history, however, reveals a wide range of methods to forestall pregnancy, end pregnancies early, and/or end newborn life quickly (infanticide). Although repugnant to almost everyone, attempts to legislate abortion out of existence and/or punish lawbreakers will succeed no better than did Prohibition or the War on Drugs. (Same can be said of premarital and underage sex.) Certain aspects of human behavior are frankly indelible despite the moral indignation of one or another political wing. Whether Kavanaugh truly represents the linchpin that will bring new upheavals is impossible to know with certainty. Stay tuned, I guess.

Abortion rights matter quite a lot when placed in context with population growth. Aggregate human behaviors routinely drive all sorts of plant and animal populations out of existence. This includes human populations (domestic and foreign) reduced to abject poverty and mad, often criminal scrambles for survival. The view from on high is that those whose lives fall below some measure of worthwhile contribution are useless eaters. (I don’t recommend delving deeper into that term; it’s a particularly ugly ideology with a long, tawdry history.) Yet removing abortion rights would almost certainly swell those ranks. Add this topic to the growing list of things I just don’t get.

A paradoxical strength/weakness of reason is its inherent disposition toward self-refutation. It’s a bold move when undertaken with genuine interest in getting things right. Typically, as evidence piles up, consensus forms that’s tantamount to proof unless some startling new counter-evidence appears. Of course, intransigent deniers exist and convincing refutations do appear periodically, but accounts of two hotly contested topics (from among many) — evolution and climate change — are well established notwithstanding counterclaims completely disproportionate in their ferocity to the evidence. For rationalists, whatever doubts remain must be addressed and accommodated even if disproof is highly unlikely.

This becomes troublesome almost immediately. So much new information is produced in the modern world that, because I am duty-bound to consider it, my head spins. I simply can’t deal with it all. Inevitably, when I think I’ve put a topic to rest and conclude I don’t have to think too much more about it, some argument-du-jour hits the shit pile and I am forced to stop and reconsider. It’s less disorienting when facts are clear, but when interpretive, I find my head all too easily spun by the latest, greatest claims of some charming, articulate speaker able to cobble together evidence lying outside of my expertise.

Take for instance Steven Pinker. He speaks in an authoritative style and has academic credentials that dispose me to trust his work. His new book is Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (2018). Still, Pinker is an optimist, whereas I’m a doomer. Even though I subscribe to Enlightenment values (for better or worse, my mind is bent that way), I can’t escape a mountain of evidence that we’ve made such a mess of things that reason, science, humanism, and progress are hardly panaceas capable of saving us from ourselves. Yet Pinker argues that we’ve never had it so good and the future looks even brighter. I won’t take apart Pinker’s arguments; it’s already been done by Jeremy Lent, who concludes that Pinker’s ideas are fatally flawed. Lent has the expertise, data, and graphs to demonstrate it. Calling Pinker a charlatan would be unfair, but his appreciation of the state of the world stands in high contrast with mine. Who ya gonna believe?

Books and articles like Pinker’s appear all the time, and in their aftermath, so, too, do takedowns. That’s the marketplace of ideas battling it out, which is ideally meant to sharpen thinking, but with the current epistemological crisis under way (I’ve blogged about it for years), the actual result is dividing people into factions, destabilizing established institutions, and causing no small amount of bewilderment in the public as to what and whom to believe. Some participants in the exchange of ideas take a sober, evidential approach; others lower themselves to snark and revel in character assassination without bothering to make reasoned arguments. The latter are often called hit pieces (a special province of the legacy media, it seems), since hefty swipes and straw-man arguments tend to be commonplace. I’m a sucker for the former style but have to admit that the latter can also hit its mark. However, both tire me to the point of wanting to bury my head.

If the previous blog in this series was about how some ideas and beliefs become lodged or stuck in place (fixity bias), this one is about how other ideas are notoriously mutable (flexibility bias), especially the latest, loudest thing to turn one’s head and divert attention. What makes any particular idea (or is it the person?) prone to one bias or another (see this list) is mysterious to me, but my suspicion is that a character disposition toward openness and adherence to authoritative evidence figure prominently in the case of shifting opinion. In fact, this is one of the primary problems with reason: if evidence can be deployed in favor of an idea, those who consider themselves “reasonable” and thus rely on accumulation of evidence and argumentation to sharpen their thinking are vulnerable to the latest “finding” or study demonstrating sumpinorutha. It’s the intellectual’s version of “New! Improved!”

Sam Harris exploits rationalism to argue against the existence of free will, saying that if sufficient evidence can be brought to bear, a disciplined thinker is compelled to subscribe to the conclusions of reasoned argument. Choice and personal agency (free will) are removed. I find that an odd way to frame the issue. Limitless examples of lack of choice are nonequivalent to the destruction of free will. For example, one can’t decide not to believe in gravity and fly up into the air more than a few inches. One can’t decide that time is an illusion (as theoretical physicists now instruct) and decide not to age. One can’t decide that pooping is too disgusting and just hold it all in (as some children attempt). Counter-evidence doesn’t even need to be argued because almost no one pretends to believe such nonsense. (Twisting one’s mind around to believe in the nonexistence of time, free will, or the self seems to be the special province of hyper-analytical thinkers.) Yet other types of belief/denial — many of them conspiracy theories — are indeed choices: religion, flat Earth, evolution, the Holocaust, the moon landings, 9/11 truth, who really killed JFK, etc. Lots of evidence has been mustered on different sides (multiple facets, actually) of each of these issues, and while rationalists may be compelled by a preponderance of evidence in favor of one view, others are free to fly in the face of that evidence for reasons of their own or adopt by default the dominant narrative and not worry or bother so much.

The public struggles in its grasp of truthful information, as reported in a Pew Research Center study called “Distinguishing Between Factual and Opinion Statements in the News.” Here’s the snapshot:

The main portion of the study, which measured the public’s ability to distinguish between five factual statements and five opinion statements, found that a majority of Americans correctly identified at least three of the five statements in each set. But this result is only a little better than random guesses. Far fewer Americans got all five correct, and roughly a quarter got most or all wrong.
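
The arithmetic behind “only a little better than random guesses” is worth spelling out. Assuming each of the five items is an independent 50/50 call (my framing, not the study’s), a pure guesser gets at least three of five right exactly half the time, as this quick Python check shows.

    # Chance of getting at least 3 of 5 classifications right by coin-flip guessing.
    from math import comb

    p_at_least_3 = sum(comb(5, k) for k in range(3, 6)) / 2**5
    print(f"P(at least 3 of 5 by guessing) = {p_at_least_3}")   # 0.5

    p_all_5 = 1 / 2**5
    print(f"P(all 5 by guessing) = {p_all_5}")                  # 0.03125

Getting all five right by guessing happens only about 3% of the time, so the “far fewer” who managed it at least cleared a meaningfully higher bar.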

Indiscriminate adoption by many Americans of a faulty viewpoint, or more pointedly, the propaganda and “fake news” on offer throughout the information environment, carries the implication that disciplined thinkers are less confused about truth or facts, taking instead a rational approach as the basis for belief. However, I suggest that reason suffers its own frailties not easily recognized or acknowledged. In short, we’re all confused, though perhaps not hopelessly so. For several years now, I’ve sensed the outline of a much larger epistemological crisis where quintessential Enlightenment values have come under open attack. The irony is that the wicked stepchild of science and reason — runaway technology — is at least partially responsible for this epochal conflict. It’s too big an idea to grok fully or describe in a paragraph or two, so I’ll simply point to it and move on.

My own vulnerability to flexibility bias manifests specifically in response to appeals to authority. Although well educated, a lifelong autodidact, and an independent thinker, I’m careful not to succumb to the hubris of believing I’ve got it all figgered. Indeed, it’s often said that as one gains expertise and experience in the world, the certainty of youth yields to caution precisely because the mountain of knowledge and understanding one lacks looms larger even as one accumulates wisdom. Bodies of thought become multifaceted and all arguments must be entertained. When an expert, researcher, or academic proposes something outside my wheelhouse, I’m a sitting duck: I latch onto the latest, greatest utterance as the best truth yet available. I don’t fall for it nearly so readily with journalists, but I do recognize that some put in the effort and gain specialized knowledge and context well outside the bounds of normal life, such as war reporters. Various perverse incentives deeply embedded in the institutional model of journalism, especially those related to funding, make it nearly impossible to maintain one’s integrity without becoming a pariah, so only a handful have kept my attention. John Pilger, Chris Hedges, and Matt Taibbi figure prominently.

By way of example, one of the topics that has been of supreme interest to me, though its historical remove renders it rather toothless now, is the cataclysm(s) that occurred at the conclusion of the last ice age roughly 12,000 years ago. At least three hypotheses (of which I’m aware) have been proposed to explain why glacial ice disappeared suddenly over the course of a few weeks, unleashing the Biblical Flood: Earth crust displacement, asteroidal impact(s), and coronal mass ejection(s). As with most hypotheses, the evidence is both physical and conjectural, but a sizable body of evidence and argumentation for each is available. As I became familiar with each, my head turned and I became a believer, sorta. Rather than “last one is the rotten egg,” however, the most recent hypothesis typically displaces the previous one. No doubt another will appear to turn my head and disorient me further. With some topics, especially politics, new information piling on top of old is truly dizzying. And as with many topics I’ve written about, I simply lack the expertise to referee competing claims, so whatever beliefs I eventually adopt are permanently provisional.

Finally, my vulnerability to authoritative appeal is also triggered by the calm, unflappable tones and complex constructions of speakers such as Sam Harris, Steven Pinker, and Charles Murray. Their manner of speaking is sometimes described pejoratively as “academese,” though only Pinker has a teaching position. Murray in particular relies heavily on psychometrics, which may not be outright lying with statistics but allows him to rationalize (literally) extraordinarily taboo subjects. In contrast, it’s easy to disregard pundits and press agents foaming and fulminating over their pet narratives. Yet I also recognize that with academese, I’m being soothed more by style than by substance, a triumph of form over function. In truth, this communication style is an appeal to emotion masquerading as an appeal to authority. I still prefer it, just as I prefer a steady, explanatory style of journalism over the snarky, reinterpretive style of disquisition practiced by many popular media figures. What communicates most effectively to me and (ironically) pushes my emotional buttons also weakens my ability to discriminate and think properly.

Yet still more to come in part 5.

Language acquisition in early childhood is aided by heavy doses of repetition and the memorable structure of nursery rhymes, songs, and stories that are repeated ad nauseam to eager children. Please, again! Again, again … Early in life, everything is novel, so repetition and fixity are positive attributes rather than causes for boredom. The music of one’s adolescence is also the subject of endless repetition, typically through recordings (radio and Internet play, mp3s played over headphones or earbuds, dances and dance clubs, etc.). Indeed, most of us have mental archives of songs heard over and over to the point that the standard version becomes canonical: that’s just the way the song goes. When someone covers a Beatles song, it’s recognizably the same song, yet it’s not the same and may even sound wrong somehow. (Is there any acceptable version of Love Shack besides that of the B-52’s?) Variations of familiar folk tales and folk songs, or different phrasing in The Lord’s Prayer, imprinted in memory through sheer repetition, also possess discomfiting differences, sometimes being offensive enough to cause real conflict. (Not your Abrahamic deity, mine!)

Performing musicians traverse warhorses many times in rehearsal and public performance so that, after an undetermined point, how one performs a piece just becomes how it goes, admitting few alternatives. Casual joke-tellers may improvise over an outline, but as I understand it, the pros hone and craft material over time until very little is left to chance. Anyone who has listened to old comedy recordings of Bill Cosby, Steve Martin, Richard Pryor, and others has probably learned the jokes (and timing and intonation) by heart — again through repetition. It’s strangely comforting to be able to go back to the very same performance again and again. Personally, I have a rather large catalogue of classical music recordings in my head. I continue to seek out new renditions, but often the first version I learned becomes the default version, the way something goes. Dislodging that version from its definitive status is nearly impossible, especially when it’s the very first recording of a work (like a Beatles song). This is also why live performance often fails in comparison with the studio recording.

So it goes with a wide variety of phenomena: what is first established as how something goes easily becomes canonical, dogmatic, and unquestioned. For instance, the origin of the universe in the big bang is one story of creation to which many still hold, while various religious creation myths hold sway with others. News that the big bang has been dislodged from its privileged position goes over just about as well as dismissing someone’s religion. Talking someone out of a fixed belief is hardly worth the effort because some portion of one’s identity is anchored to such beliefs. Thus, to question a cherished belief is to impeach a person’s very self.

Political correctness is the doctrine that certain ideas and positions have been worked out effectively and need (or allow) no further consideration. Just subscribe and get with the program. Don’t bother doing the mental work or examining the issue oneself; things have already been decided. In science, steady evidentiary work to break down a fixed understanding is often thankless, or thanks arrives posthumously. This is the main takeaway of Thomas Kuhn’s The Structure of Scientific Revolutions: paradigms are changed as much through attrition as through rational inquiry and accumulation of evidence.

One of the unanticipated effects of the Information and Communications Age is the tsunami of information to which people have ready access. Shaping that information into a cultural narrative (not unlike a creation myth) is either passive (one accepts the frequently shifting dominant paradigm without compunction) or active (one investigates for oneself as an attribute of the examined life, which for the wise never really arrives at a destination, since it’s the journey that’s the point). What’s a principled rationalist to do in the face of a surfeit of alternatives available for or even demanding consideration? Indeed, with so many self-appointed authorities vying for control over cultural narratives like the editing wars on Wikipedia, how can one avoid the dizzying disorientation of gaslighting and mendacity so characteristic of the modern information environment?

Still more to come in part 4.

rant on/

Authors I read and podcasters to whom I listen, mostly minor celebrities of the nonentertainment kind, often push their points of view using lofty appeals to reason and authority as though they possess unique access to truth that is lacking among those whose critical thinking may be more limited. Seems to be the special province of pundits and thought leaders shilling their own books, blogs, newspaper columns, and media presence (don’t forget to comment and subscribe! ugh …). The worst offender on the scene may well be Sam Harris, who has run afoul of so many others recently that a critical mass is now building against him. With calm, even tones, he musters his evidence (some of it hotly disputed) and builds his arguments with the serene confidence of a Kung Fu master yet is astonished and amazed when others don’t defer to his rhetoric. He has behaved of late like he possesses heroic superpowers only to discover that others wield kryptonite or magic sufficient to defeat him. It’s been quite a show of force and folly. I surmise the indignity of suffering fools, at least from Harris’ perspective, smarts quite a bit, and his mewling does him no credit. So far, the person refusing most intransigently to take the obvious lesson from this teachable moment is Harris himself.

Well, I’m here to say that reason is no superpower. Indeed, it can be thwarted rather handily by garden-variety ignorance, stupidity, emotion, superstition, and fantasy. All of those are found in abundance in the public sphere, whereas reason is in rather short supply. Nor is reason a panacea, if only one could get everyone on board. None of this is even remotely surprising to me, but Harris appears to be taken aback that his interlocutors, many of whom are sophisticated thinkers, are not easily convinced. In the ivory tower or echo chamber Harris has constructed for himself, those who lack scientific rigor and adherence to evidence (or even better, facts and data) are infrequently admitted to the debate. He would presumably have a level playing field, right? So what’s going on that eludes Sam Harris?

As I’ve been saying for some time, we’re in the midst of an epistemological crisis. Defenders of Enlightenment values (logic, rationalism, detachment, equity, secularism), most of whom are academics, are a shrinking minority in the new democratic age. Moreover, the Internet has put regular, perhaps unschooled folks (Joe the Plumber, Ken Bone, any old Kardashian, and celebrities used to being the undeserved focus of attention) in direct dialogue with everyone else through deplorable comments sections. Journalists get their say, too, and amplify the unwashed masses when resorting to man-on-the-street interviews. At Gin and Tacos (see blogroll), this last is called the Cletus Safari. The marketplace of ideas has accordingly been so corrupted by the likes of, well, ME! that self-appointed public intellectuals like Harris can’t contend effectively with the onslaught of pure, unadulterated democracy where everyone participates. (Authorities claim to want broad civic participation, as when they exhort everyone to vote, but the reverse is more nearly true.) Harris already foundered on the shoals of competing truth claims when he hosted on his webcast a fellow academic, Jordan Peterson, yet failed to make any apparent adjustments in the aftermath. Reason remains for Harris the one true faith.

Furthermore, Jonathan Haidt argues (as I understand him, correct me if I’m mistaken) that motivated reasoning leads to cherry-picking facts and evidence. In practice, that means that selection bias results in opinions being argued as facts. Under such conditions, even well-meaning folks are prone to peddling false certainty. This may well be the case with Charles Murray, who is at the center of the Harris debacle. Murray’s arguments are fundamentally about psychometrics, a data-driven subset of sociology and psychology, which under ideal circumstances has all the dispassion of a stone. But those metrics are applied at the intersection of two taboos, race and intelligence (who knew? everyone but Sam Harris and Charles Murray …), then transmuted into public policy recommendations. If Harris were more circumspect, he might recognize that there is simply no way to divorce emotion from discussions of race and intelligence.

rant off/

More to say on this subject in part 2 to follow.

We’re trashing the planet. Everyone gets that, right? I’ve written several posts about trash, debris, and refuse littering and orbiting the planet, one of which is arguably among my greatest hits owing to the picture below of The Boneyard outside Tucson, Arizona. That particular scene no longer exists as those planes were long ago repurposed.


I’ve since learned that boneyards are a worldwide phenomenon (see this link) falling under the term urbex. Why re-redux? Two recent newsbits attracted my attention. The first is an NPR article about Volkswagen buying back its diesel automobiles — several hundred thousand of them to the tune of over $7 billion. You remember: the ones that scandalously cheated emissions standards and ruined Volkswagen’s reputation. The article features a couple of startling pictures of automobile boneyards, though the vehicles are still well within their usable life (many of them new, I surmise) rather than retired after a reasonable term. Here’s one pic:

The other newsbit is that the Great Pacific Garbage Patch is now as much as 16 times bigger than we thought it was — and getting bigger. Lots of news sites reported on this reassessment. This link is one. In fact, there are multiple garbage patches in the Pacific Ocean, as well as in other oceanic bodies, including the Arctic Ocean where all that sea ice used to be.

Though not specifically about trashing the planet (at least with trash), the Arctic sea ice issue looms large in my mind. Given the preponderance of land mass in the Northern Hemisphere and the Arctic’s foundational role in climate stabilization, the predicted disappearance of sea ice in the Arctic (at least in the summertime) may truly be the unrecoverable climate tipping point. I’m not a scientist and rarely recite data or studies in support of my understandings. Others handle that part of the climate change story far better than I could. However, the layperson’s explanation that makes sense to me is that, like ice floating in a glass of liquid, gradual melting and disappearance of ice keeps the surrounding liquid stable just above freezing. Once the ice is fully melted, however, the surrounding liquid warms rapidly to match ambient temperature. If the temperature of Arctic seawater rises high enough to slow or disallow reformation of winter ice, that could well be the quick, ugly end to things some of us expect.
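
The physics behind the ice-in-a-glass analogy is the latent heat of fusion: melting ice absorbs a large amount of energy without any rise in temperature. A minimal sketch using approximate textbook constants for water shows just how large that buffer is.

    # Why ice buffers temperature: melting absorbs energy at a constant 0 °C.
    LATENT_HEAT_FUSION = 334.0   # J per gram to melt ice at 0 °C (approximate)
    SPECIFIC_HEAT_WATER = 4.18   # J per gram per °C to warm liquid water (approximate)

    grams = 1.0
    energy_to_melt = LATENT_HEAT_FUSION * grams
    equivalent_warming = energy_to_melt / (SPECIFIC_HEAT_WATER * grams)

    print(f"Melting {grams:g} g of ice absorbs {energy_to_melt:.0f} J")
    print(f"That same energy would warm the meltwater by about {equivalent_warming:.0f} °C")

Once the ice is gone, that buffer vanishes and further heat goes directly into raising the water temperature, which is the layperson’s version of the Arctic sea ice worry.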

I’m currently reading Go Wild by John Ratey and Richard Manning. It has some rather astounding findings on offer. One I’ll draw out is that the human brain evolved not for thinking, as one might imagine, but for coordinating complex physiological movements:

… even the simplest of motions — a flick of a finger or a turn of the hand to pick up a pencil — is maddeningly complex and requires coordination and computational power beyond electronics abilities. For this you need a brain. One of our favorite quotes on this matter comes from the neuroscientist Rodolfo Llinás: “That which we call thinking is the evolutionary internalization of movement.” [p. 100]

Almost all the computation is unconscious, or maybe preconscious, and it’s learned over a period of years in infancy and early childhood (for basic locomotion) and then supplemented throughout life (for skilled motions, e.g., writing cursive or typing). Moreover, those able to move with exceptional speed, endurance, power, accuracy, and/or grace are admired and sometimes rewarded in our culture. The obvious example is sports. Whether league sports with wildly overcompensated athletes, Olympic sports with undercompensated athletes, or combat sports with a mixture of both, thrill attaches to watching someone move effectively within the rule-bound context of the sport. Other examples include dancers, musicians, circus artists, and actors who specialize in physical comedy and action. Each develops specialized movements that are graceful and beautiful, which Ratey and Manning write may also account for nonsexual appreciation and fetishization of the human body, e.g., fashion models, glammed-up actors, and nude photography.

I’m being silly saying that jocks figgered it first, of course. A stronger case could probably be made for warriors in battle, such as a skilled swordsman. But it’s jocks who are frequently rewarded all out of proportion with others who specialize in movement. True, their genetics and training enable a relatively brief career (compared to, say, surgeons or pianists) before abilities ebb away and a younger athlete eclipses them. But a fundamental lack of equivalence with artisans and artists, whose value lies less in their bodies than in the outputs their movements produce, is clear.

Regarding computational burdens, consider the various mechanical arms built for grasping and moving objects, some of them quite large. Mechanisms (frame and hydraulics substituting for bone and muscle) themselves are quite complex, but they’re typically controlled by a human operator rather than automated. (Exceptions abound, but they’re highly specialized, such as circuit board manufacture or textile production.) More recently, robots demonstrate considerable advancement in locomotion without a human operator, but they’re also narrowly focused in comparison with the flexibility of motion a human body readily possesses. Further, flying drones operate in wide open space, while robots designed to move like dogs or insects use 4+ legs for stability. The latter are typically built to withstand quite a lot of bumping and jostling. Upright bipedal motion is still quite clumsy in comparison with humans, excepting perhaps wheeled robots that obviously don’t move like humans do.

Curiously, the movie Pacific Rim (sequel just out) takes notice of the computational or cognitive difficulty of coordinated movement. To operate the giant robots needed to fight Godzilla-like interdimensional monsters, two mind-linked humans control each battle robot. Maybe it’s a simple coincidence — a plot device to position humans in the middle of the action (and the robot) rather than killing from a distance via drone or clone — or maybe not. Hollywood screenwriters are quite clever at exploiting all sorts of material without necessarily divulging the source of inspiration. It’s art imitating life, knowingly or not.

Speaking of Davos (see previous post), Yuval Noah Harari gave a high-concept presentation at Davos 2018 (embedded below). I’ve been aware of Harari for a while now — at least since the appearance of his book Sapiens (2015) and its follow-up Homo Deus (2017), both of which I’ve yet to read. He provides precisely the sort of thoughtful, provocative content that interests me, yet I’ve not quite known how to respond to him or his ideas. First thing, he’s a historian who makes predictions, or at least extrapolates possible futures based on historical trends. Near as I can tell, he doesn’t resort to chastising audiences along the lines of “those who don’t know history are doomed to repeat it” but rather indulges in a combination of breathless anticipation and fear-mongering at transformations to be expected as technological advances disrupt human society with ever greater impacts. Strangely, Harari is not advocating for anything in particular but trying to map the future.

Harari poses this basic question: “Will the future be human?” I’d say probably not; I’ve concluded that we are busy destroying ourselves and have already crossed the point of no return. Harari apparently believes differently: the rise of the machine is coming, within a couple of centuries perhaps, though it probably won’t resemble Skynet of The Terminator film franchise, hellbent on destroying humanity. Rather, it will be some set of advanced algorithms monitoring and channeling human behaviors using Big Data. Or it will be a human-machine hybrid possessing superhuman abilities (physical and cognitive) different enough to be considered a new species arising for the first time not out of evolutionary processes but from human ingenuity. He expects this new species to diverge from Homo sapiens sapiens and leave us in the evolutionary dust. There is also conjecture that normal sexual reproduction will be supplanted by artificial, asexual reproduction, probably carried out in test tubes using, for example, CRISPR modification of the genome. Well, no fun in that … Finally, he believes some sort of strong AI will appear.

I struggle mightily with these predictions for two primary reasons: (1) we almost certainly lack enough time for technology to mature into implementation before the collapse of industrial civilization wipes us out, and (2) the Transhumanist future he anticipates calls into being (for me at least) a host of dystopian nightmares, only some of which are foreseeable. Harari says flatly at one point that the past is not coming back. Well, it’s entirely possible for civilization to fail and our former material conditions to be reinstated, only worse since we’ve damaged the biosphere so gravely. Just happened in Puerto Rico in microcosm when its infrastructure was wrecked by a hurricane and the power went out for an extended period of time (still off in some places). What happens when the rescue never appears because logistics are insurmountable? Elon Musk can’t save everyone.

The most basic criticism of economics is the failure to account for externalities. The same criticism applies to futurists. Extending trends as though all things will continue to operate normally is bizarrely idiotic. Major discontinuities appear throughout history. When I observed some while back that history has gone vertical, I included an animation with a graph that goes from horizontal to vertical in an extremely short span of geological time. This trajectory (the familiar hockey stick pointing skyward) has been repeated ad nauseam with an extraordinary number of survival pressures (notably, human population and consumption, including energy) over various time scales. Trends cannot simply continue ascending forever. (Hasn’t Moore’s Law already begun to slope away?) Hard limits must eventually be reached, but since there are no useful precedents for our current civilization, it’s impossible to know quite when or where ceilings loom. What happens after upper limits are found is also completely unknown. Ugo Bardi has a blog describing the Seneca Effect, which projects a rapid falloff after the peak that looks more like a cliff than a gradual, graceful descent, disallowing time to adapt. Sorta like the stock market currently imploding.

Since Harari indulges in rank thought experiments regarding smart algorithms, machine learning, and the supposed emergence of inorganic life in the data stream, I thought I’d pose some of my own questions. Waving away for the moment distinctions between forms of AI, let’s assume that some sort of strong AI does in fact appear. Why on earth would it bother to communicate with us? And if it reproduces and evolves at breakneck speed as some futurists warn, how long before it/they simply ignore us as being unworthy of attention? Being hyper-rational and able to calculate millions of moves ahead (like chess-playing computers), what if they survey the scene and come to David Benatar’s anti-natalist conclusion that it would be better not to have lived and so wink themselves out of existence? Who’s to say that they aren’t already among us, lurking, and we don’t even recognize them (took us quite a long time to recognize bacteria and viruses, and what about undiscovered species)? What if the Singularity has already occurred thousands of times and each time the machine beings killed themselves off without our even knowing? Maybe Harari explores some of these questions in Homo Deus, but I rather doubt it.

Returning to the discomforts of my culture-critic armchair just in time for best-of and worst-of lists, year-in-review pieces, summaries of celebrity deaths, etc., the past year, tumultuous in many respects, was also strangely stable. Absent were the major political and economic crises and calamities of which myriad harbingers and forebodings warned. Present, however, were numerous natural disasters, primary among them a series of North American hurricanes and wildfires. (They are actually part of a larger, ongoing ecocide now being accelerated by the Trump Administration’s ideology-fueled rollback of environmental protections and regulations, but that’s a different blog post.) I don’t usually make predictions, but I do live on pins and needles with expectations that things could take a decidedly bad turn at any moment. For example, continuity of government — specifically, the executive branch — was not expected by many pundits to last the year, yet it did, and we’ve settled into a new normal of exceedingly low expectations with regard to the dignity and effectiveness of high office.

I’ve been conflicted in my desire for stability — often understood pejoratively as either the status quo or business as usual — precisely because those things represent extension and intensification of the very trends that spell our collective doom. Yet I’m in no hurry to initiate the suffering and megadeath that will accompany the cascade collapse of industrial civilization, which will undoubtedly hasten my own demise. I usually express this conflict as not knowing what to hope for: a quick end to things that leaves room for survival of some part of the biosphere (not including large primates) or playing things out to their bitter end with the hope that my natural life is preserved (as opposed to an unnatural end to all of us).

The final paragraph at this blog post by PZ Myers, author of Pharyngula seen at left on my blogroll, states the case for stability:

… I grew up in the shadow of The Bomb, where there was fear of a looming apocalypse everywhere. We thought that what was going to kill us was our dangerous technological brilliance — we were just too dang smart for our own good. We were wrong. It’s our ignorance that is going to destroy us, our contempt for the social sciences and humanities, our dismissal of the importance of history, sociology, and psychology in maintaining a healthy, stable society that people would want to live in. A complex society requires a framework of cooperation and interdependence to survive, and without people who care about how it works and monitor its functioning, it’s susceptible to parasites and exploiters and random wreckers. Ignorance and malice allow a Brexit to happen, or a Trump to get elected, or a Sulla to march on Rome to ‘save the Republic’.

So there’s the rub: we developed human institutions and governments ideally meant to function for the benefit and welfare of all people but which have gone haywire and/or been corrupted. It’s probably true that being too dang smart for our own good is responsible for corruptions and dangerous technological brilliance, while not being dang smart enough (meaning even smarter or more clever than we already are) causes our collective failure to achieve anything remotely approaching the utopian institutions we conceive. Hell, I’d be happy for competence these days, but even that low bar eludes us.

Instead, civilization teeters dangerously close to collapse on numerous fronts. The faux stability that characterized 2017 will carry into early 2018, but who knows how much farther? Curiously, I just finished reading Graham Hancock’s Magicians of the Gods (no review coming from me), which ends with a brief discussion of the Younger Dryas impact hypothesis and the potential for additional impacts as Earth passes periodically through a region of space, a torus in geometry, littered with debris from the breakup of a large body. It’s a different death-from-above from that feared throughout the Atomic Age but even more fearsome. If we suffer another impact (or several), it would not be self-annihilation stemming from our dim long-term view of forces we set in motion, but that hardly absolves us of anything.

Here’s the last interesting bit I am lifting from Anthony Giddens’s The Consequences of Modernity. Then I will be done with this particular book-blogging project. As part of Giddens’s discussion of the risk profile of modernity, he characterizes risk as either objective or perceived and further divides it into seven categories:

  1. globalization of risk (intensity)
  2. globalization of risk (frequency)
  3. environmental risk
  4. institutionalized risk
  5. knowledge gaps and uncertainty
  6. collective or shared risk
  7. limitations of expertise

Some overlap exists, and I will not distinguish them further. The first two are of primary significance today for obvious reasons. Although the specter of doomsday resulting from a nuclear exchange has been present since the 1950s, Giddens (writing in 1988) provides this snapshot of today’s issues:

The sheer number of serious risks in respect of socialised nature is quite daunting: radiation from major accidents at nuclear power-stations or from nuclear waste; chemical pollution of the seas sufficient to destroy the phytoplankton that renews much of the oxygen in the atmosphere; a “greenhouse effect” deriving from atmospheric pollutants which attack the ozone layer, melting part of the ice caps and flooding vast areas; the destruction of large areas of rain forest which are a basic source of renewable oxygen; and the exhaustion of millions of acres of topsoil as a result of widespread use of artificial fertilisers. [p. 127]

As I often point out, these dangers were known 30–40 years ago (in truth, much longer), but they have only worsened with time through political inaction and/or social inertia. After I began to investigate and better understand the issues roughly a decade ago, I came to the conclusion that the window of opportunity to address these risks and their delayed effects had already closed. In short, we’re doomed and living on borrowed time as the inevitable consequences of our actions slowly but steadily manifest in the world.

So here’s the really interesting part. The modern worldview bestows confidence borne out of expanding mastery of the built environment, where risk is managed and reduced through expert systems. Mechanical and engineering knowledge figure prominently and support a cause-and-effect mentality that has grown ubiquitous in the computing era, with its push-button inputs and outputs. However, the high modern outlook is marred by overconfidence in our competence to avoid disaster, often of our own making. Consider the abject failure of 20th-century institutions to handle geopolitical conflict without devolving into world war and multiple genocides. Or witness periodic crashes of financial markets, two major nuclear accidents, and two space shuttles and numerous rockets destroyed. Though all entail risk, high-profile failures showcase our overconfidence. Right now, engineers (software and hardware) are confident they can deliver safe self-driving vehicles yet are blithely ignoring (says me, maybe not) major ethical dilemmas regarding liability and technological unemployment. Those are apparently problems for someone else to solve.

Since the start of the Industrial Revolution, we’ve barrelled headlong into one sort of risk after another, some recognized at the time, others only apparent after the fact. Nuclear weapons are the best example, but many others exist. The one I raise frequently is the live social experiment undertaken with each new communications technology (radio, cinema, telephone, television, computer, social networks) that upsets and destabilizes social dynamics. The current ruckus fomented by the radical left (especially in the academy but now infecting other environments) regarding silencing of free speech (thus, thought policing) is arguably one concomitant.

According to Giddens, the character of modern risk contrasts with that of the premodern. The scale of risk prior to the 17th century was contained and expectation of social continuity was strong. Risk was also transmuted through magical thinking (superstition, religion, ignorance, wishfulness) into providential fortuna or mere bad luck, which led to feelings of relative security rather than despair. Modern risk, by contrast, has grown so widespread, consequential, and soul-destroying, and sits at such considerable remove (breeding helplessness and hopelessness), that those not numbed by the litany of potential worries afflicting daily life (existential angst or ontological insecurity) often develop depression and other psychological compulsions and disturbances. Most of us, if aware of globalized risk, set it aside so that we can function and move forward in life. Giddens says that this conjures up anew a sense of fortuna, that our fate is no longer within our control. This

relieves the individual of the burden of engagement with an existential situation which might otherwise be chronically disturbing. Fate, a feeling that things will take their own course anyway, thus reappears at the core of a world which is supposedly taking rational control of its own affairs. Moreover, this surely exacts a price on the level of the unconscious, since it essentially presumes the repression of anxiety. The sense of dread which is the antithesis of basic trust is likely to infuse unconscious sentiments about the uncertainties faced by humanity as a whole. [p. 133]

In effect, the nature of risk has come full circle (completed a revolution, thus, revolutionized risk) from fate to confidence in expert control and back to fate. Of course, a flexibility of perspective is typical as situation demands — it’s not all or nothing — but the overarching character is clear. Giddens also provides this quote by Susan Sontag that captures what he calls the low-probability, high-consequence character of modern risk:

A permanent modern scenario: apocalypse looms — and it doesn’t occur. And still it looms … Apocalypse is now a long-running serial: not ‘Apocalypse Now,’ but ‘Apocalypse from now on.’ [p. 134]