Archive for the ‘Science’ Category

Language acquisition in early childhood is aided by heavy doses of repetition and the memorable structure of nursery rhymes, songs, and stories that are repeated ad nauseam to eager children. Please, again! Again, again … Early in life, everything is novel, so repetition and fixity are positive attributes rather than causes for boredom. The music of one’s adolescence is also the subject of endless repetition, typically through recordings (radio and Internet play, mp3s played over headphones or earbuds, dances and dance clubs, etc.). Indeed, most of us have mental archives of songs heard over and over to the point that the standard version becomes canonical: that’s just the way the song goes. When someone covers a Beatles song, it’s recognizably the same song, yet it’s not the same and may even sound wrong somehow. (Is there any acceptable version of Love Shack besides that of the B-52s?) Variations of familiar folk tales and folk songs, or different phrasing in The Lord’s Prayer, imprinted in memory through sheer repetition, also possess discomfiting differences, sometimes being offensive enough to cause real conflict. (Not your Abrahamic deity, mine!)

Performing musicians traverse warhorses many times in rehearsal and public performance so that, after an undetermined point, how one performs a piece just becomes how it goes, admitting few alternatives. Casual joke-tellers may improvise over an outline, but as I understand it, the pros hone and craft material over time until very little is left to chance. Anyone who has listened to old comedy recordings of Bill Cosby, Steve Martin, Richard Pryor, and others has probably learned the jokes (and timing and intonation) by heart — again through repetition. It’s strangely comforting to be able to go back to the very same performance again and again. Personally, I have a rather large catalogue of classical music recordings in my head. I continue to seek out new renditions, but often the first version I learned becomes the default version, the way something goes. Dislodging that version from its definitive status is nearly impossible, especially when it’s the very first recording of a work (like a Beatles song). This is also why live performance often fails in comparison with the studio recording.

So it goes with a wide variety of phenomena: what is first established as how something goes easily becomes canonical, dogmatic, and unquestioned. For instance, the origin of the universe in the big bang is one story of creation to which many still hold, while various religious creation myths hold sway with others. News that the big bang has been dislodged from its privileged position goes over just about as well as dismissing someone’s religion. Talking someone out of a fixed belief is hardly worth the effort because some portion of one’s identity is anchored to such beliefs. Thus, to question a cherished belief is to impeach a person’s very self.

Political correctness is the doctrine that certain ideas and positions have been worked out effectively and need (or allow) no further consideration. Just subscribe and get with the program. Don’t bother doing the mental work or examining the issue for oneself; things have already been decided. In science, steady evidentiary work to break down a fixed understanding is often thankless, or thanks arrives posthumously. This is the main takeaway of Thomas Kuhn’s The Structure of Scientific Revolutions: paradigms are changed as much through attrition as through rational inquiry and accumulation of evidence.

One of the unanticipated effects of the Information and Communications Age is the tsunami of information to which people have ready access. Shaping that information into a cultural narrative (not unlike a creation myth) is either passive (one accepts the frequently shifting dominant paradigm without compunction) or active (one investigates for oneself as an attribute of the examined life, which for the wise never really arrives at a destination, since it’s the journey that’s the point). What’s a principled rationalist to do in the face of a surfeit of alternatives available for or even demanding consideration? Indeed, with so many self-appointed authorities vying for control over cultural narratives like the editing wars on Wikipedia, how can one avoid the dizzying disorientation of gaslighting and mendacity so characteristic of the modern information environment?

Still more to come in part 4.


Continuing from part 1, which is altogether too much screed and frustration with Sam Harris, I now point to several analyses that support my contentions. First is an article in The Nation about the return of so-called scientific racism that speaks directly about Charles Murray, Sam Harris, and Andrew Sullivan, all of whom are embroiled in the issue. Second is an article in The Baffler about constructing arguments ex post facto to conform to conclusions motivated in advance of evidence. Most of us are familiar with the constructed explanation, where in the aftermath of an event, pundits, press agents, and political insiders propose various explanatory narratives to gain control over what will eventually become the conventional understanding. The Warren Commission’s report on the assassination of JFK is one such published example, and I daresay few now believe that the report and the consensus it presents weren’t politically motivated and highly flawed. Both linked articles above are written by Edward Burmilla, who blogs at Gin and Tacos (see blogroll). Together, they paint a dismal picture of how reason and rhetoric can be corrupted despite the sheen of scientific respectability.

Third is an even more damaging article (actually a review of the new anthology Trump and the Media) in the Los Angeles Review of Books by Nicholas Carr asking the pointed question “Can Journalism Be Saved?” Admittedly, journalism is not equivalent to reason or rationalism, but it is among several professions that employ claims of objectivity, accuracy, and authority. Thus, journalism demands both attention and respect far in excess of what is accorded the typical blogger (such as me) or watering-hole denizen perched atop a barstool. Consider this pull quote:

… the flaws in computational journalism can be remedied through a more open and honest accounting of its assumptions and limitations. C. W. Anderson, of the University of Leeds, takes a darker view. To much of the public, he argues, the pursuit of “data-driven objectivity” will always be suspect, not because of its methodological limits but because of its egghead aesthetics. Numbers and charts, he notes, have been elements of journalism for a long time, and they have always been “pitched to a more policy-focused audience.” With its ties to social science, computational journalism inevitably carries an air of ivory-tower elitism, making it anathema to those of a populist bent.

Computational journalism is contrasted with other varieties of journalism based on, say, personality, emotionalism, advocacy, or simply a mad rush to print (or pixels) to scoop the competition. This hyperrational approach has already revealed its failings, as Carr reports in his review.

What I’m driving at is that, despite frequent appeals to reason, authority, and accuracy (especially the quantitative sort), certain categories of argumentation fail to register on the average consumer of news and information. It’s not a question of whether arguments are right or wrong, precisely; it’s about what appeals most to those paying even a modest bit of attention. And the primary appeal for most (I judge) isn’t reason. Indeed, reason is swept aside handily when a better, um, reason for believing something appears. If one has done the difficult work of acquiring critical thinking and reasoning skills, it can be quite the wake-up call when others fail to behave according to reason, such as when they act against enlightened self-interest. The last presidential election was a case in point.

Circling back to something from an earlier blog, much of human cognition is based on mere sufficiency: whatever is good enough in the moment gets nominated, then promoted to belief and/or action. Fight, flight, or freeze is one example. Considered evaluation and reason are not even factors. Snap judgments, gut feelings, emotional resonances, vibes, heuristics, and Gestalts dominate momentary decision-making, and in the absence of convincing countervailing information (if indeed one is even vulnerable to reason, which would be an unreasonable assumption), action is reinforced and suffices as belief.

Yet more in part 3 to come.

We’re trashing the planet. Everyone gets that, right? I’ve written several posts about trash, debris, and refuse littering and orbiting the planet, one of which is arguably among my greatest hits owing to the picture below of The Boneyard outside Tucson, Arizona. That particular scene no longer exists as those planes were long ago repurposed.


I’ve since learned that boneyards are a worldwide phenomenon (see this link) falling under the term urbex. Why re-redux? Two recent newsbits attracted my attention. The first is an NPR article about Volkswagen buying back its diesel automobiles — several hundred thousand of them to the tune of over $7 billion. You remember: the ones that scandalously cheated emissions standards and ruined Volkswagen’s reputation. The article features a couple of startling pictures of automobile boneyards, though the vehicles are still well within their usable life (many of them new, I surmise) rather than retired after a reasonable term. Here’s one pic:

The other newsbit is that the Great Pacific Garbage Patch is now as much as 16 times bigger than we thought it was — and getting bigger. Lots of news sites reported on this reassessment. This link is one. In fact, there are multiple garbage patches in the Pacific Ocean, as well as in other oceanic bodies, including the Arctic Ocean where all that sea ice used to be.

Though not specifically about trashing the planet (at least with trash), the Arctic sea ice issue looms large in my mind. Given the preponderance of land mass in the Northern Hemisphere and the Arctic’s foundational role in climate stabilization, the predicted disappearance of sea ice in the Arctic (at least in the summertime) may truly be the unrecoverable climate tipping point. I’m not a scientist and rarely recite data or studies in support of my understandings. Others handle that part of the climate change story far better than I could. However, the layperson’s explanation that makes sense to me is that, like ice floating in a glass of liquid, gradual melting and disappearance of ice keeps the surrounding liquid stable just above freezing. Once the ice is fully melted, however, the surrounding liquid warms rapidly to match ambient temperature. If the temperature of Arctic seawater rises high enough to slow or disallow reformation of winter ice, that could well be the quick, ugly end to things some of us expect.
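A back-of-the-envelope calculation shows why the ice-in-a-glass analogy has teeth: melting a gram of ice absorbs roughly as much heat as warming that same gram of liquid water by 80 °C, so the mixture hovers near freezing until the ice is gone and only then warms quickly. Here is a minimal sketch using standard physical constants for water; the quantities of ice, water, and ambient heat are invented purely for illustration.

```python
# Back-of-the-envelope illustration of the ice-in-a-glass analogy.
# Physical constants are standard values; the glass contents are made up.
LATENT_HEAT_FUSION = 334.0    # joules to melt one gram of ice at 0 °C
SPECIFIC_HEAT_WATER = 4.18    # joules to warm one gram of liquid water by 1 °C

ice_grams = 100.0             # ice remaining in the glass
water_grams = 300.0           # liquid water in the glass
heat_input = 50_000.0         # joules of ambient heat absorbed (arbitrary)

# While ice remains, incoming heat goes into melting it, not warming the water.
heat_to_melt_all_ice = ice_grams * LATENT_HEAT_FUSION          # 33,400 J
leftover_heat = max(0.0, heat_input - heat_to_melt_all_ice)

# Only after the ice is gone does the remaining heat warm the (now larger) liquid mass.
temp_rise = leftover_heat / ((water_grams + ice_grams) * SPECIFIC_HEAT_WATER)

print(f"heat consumed melting the ice: {heat_to_melt_all_ice:,.0f} J")
print(f"temperature rise once the ice is gone: {temp_rise:.1f} °C")
```

Scale the numbers however you like; the point is that the latent heat of the ice acts as a thermal buffer, and once the ice melts that buffer vanishes all at once.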

I’m currently reading Go Wild by John Ratey and Richard Manning. It has some rather astounding findings on offer. One I’ll draw out is that the human brain evolved not for thinking, as one might imagine, but for coordinating complex physiological movements:

… even the simplest of motions — a flick of a finger or a turn of the hand to pick up a pencil — is maddeningly complex and requires coordination and computational power beyond electronics’ abilities. For this you need a brain. One of our favorite quotes on this matter comes from the neuroscientist Rodolfo Llinás: “That which we call thinking is the evolutionary internalization of movement.” [p. 100]

Almost all the computation is unconscious, or maybe preconscious, and it’s learned over a period of years in infancy and early childhood (for basic locomotion) and then supplemented throughout life (for skilled motions, e.g., writing cursive or typing). Moreover, those able to move with exceptional speed, endurance, power, accuracy, and/or grace are admired and sometimes rewarded in our culture. The obvious example is sports. Whether league sports with wildly overcompensated athletes, Olympic sports with undercompensated athletes, or combat sports with a mixture of both, thrill attaches to watching someone move effectively within the rule-bound context of the sport. Other examples include dancers, musicians, circus artists, and actors who specialize in physical comedy and action. Each develops specialized movements that are graceful and beautiful, which Ratey and Manning write may also account for nonsexual appreciation and fetishization of the human body, e.g., fashion models, glammed-up actors, and nude photography.

I’m being silly saying that jocks figgered it first, of course. A stronger case could probably be made for warriors in battle, such as a skilled swordsman. But it’s jocks who are frequently rewarded all out of proportion with others who specialize in movement. True, their genetics and training enable a relatively brief career (compared to, say, surgeons or pianists) before abilities ebb away and a younger athlete eclipses them. But a fundamental lack of equivalence is clear with artisans and artists, whose value lies less with their bodies than with the outputs their movements produce.

Regarding computational burdens, consider the various mechanical arms built for grasping and moving objects, some of them quite large. The mechanisms (frame and hydraulics substituting for bone and muscle) are themselves quite complex, but they’re typically controlled by a human operator rather than automated. (Exceptions abound, but they’re highly specialized, such as circuit-board manufacture or textile production.) More recently, robots demonstrate considerable advancement in locomotion without a human operator, but they’re also narrowly focused in comparison with the flexibility of motion a human body readily possesses. Further, such robots either operate in wide open space (flying drones) or, when designed to move like dogs or insects, use four or more legs for stability; the latter are typically built to withstand quite a lot of bumping and jostling. Upright bipedal motion is still quite clumsy in comparison with humans, excepting perhaps wheeled robots that obviously don’t move like humans do.
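To make the computational point a little more concrete, here is a toy sketch of my own (not drawn from Ratey and Manning or from any particular robot’s controller): solving the two joint angles of a planar two-link “arm” so that its endpoint lands on a target. The link lengths and target coordinates are made-up numbers.

```python
import math

# Toy two-link planar arm: find the joint angles that put the "hand" on a target.
# All values are illustrative; a real limb adds gravity, muscle dynamics,
# balance, and continuous feedback on top of this bare geometry.
L1, L2 = 0.30, 0.25                # upper-arm and forearm lengths (meters)
target_x, target_y = 0.35, 0.20    # where the hand should end up

# Law of cosines gives the elbow angle ...
d_squared = target_x**2 + target_y**2
cos_elbow = (d_squared - L1**2 - L2**2) / (2 * L1 * L2)
if abs(cos_elbow) > 1:
    raise ValueError("target out of reach")
elbow = math.acos(cos_elbow)

# ... and the shoulder angle follows from the target direction,
# corrected for the offset introduced by the bent elbow.
shoulder = math.atan2(target_y, target_x) - math.atan2(
    L2 * math.sin(elbow), L1 + L2 * math.cos(elbow))

print(f"shoulder: {math.degrees(shoulder):.1f} degrees, elbow: {math.degrees(elbow):.1f} degrees")
```

Even this stripped-down version demands trigonometry for every reach, and it ignores everything that makes real movement hard; the arm attached to you has seven degrees of freedom and hundreds of motor units to coordinate continuously, which is the unnoticed burden the brain carries.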

Curiously, the movie Pacific Rim (sequel just out) takes notice of the computational or cognitive difficulty of coordinated movement. To operate the giant robots needed to fight Godzilla-like interdimensional monsters, two mind-linked humans share control of each battle robot. Maybe it’s a simple coincidence — a plot device to position humans in the middle of the action (and robot) rather than killing from a distance — such as via drone or clone — or maybe not. Hollywood screenwriters are quite clever at exploiting all sorts of material without necessarily divulging the source of inspiration. It’s art imitating life, knowingly or not.

Long again this time and a bit contentious. Sorry for trying your patience.

Having watched a few hundred Joe Rogan webcasts by now (previous blog on this topic here), I am pretty well acquainted with guests and ideas that cycle through periodically. This is not a criticism as I’m aware I recycle my own ideas here, which is more nearly thematic than simply repetitive. Among all the MMA folks and comedians, Rogan features people — mostly academics — who might be called thought leaders. A group of them has even been dubbed the “intellectual dark web.” I dunno who coined the phrase or established its membership, but the names might include, in no particular order, Jordan Peterson, Bret Weinstein, Eric Weinstein, Douglas Murray, Sam Harris, Jonathan Haidt, Gad Saad, Camille Paglia, Dave Rubin, Christina Hoff Sommers, and Lawrence Krauss. I doubt any of them would have been considered cool kids in high school, and it’s unclear whether they’re any cooler now that they’ve all achieved some level of Internet fame on top of other public exposure. Only a couple seem especially concerned with being thought cool now (names withheld), though the chase for clicks, views, likes, and Patreon support is fairly upfront. That they can usually sit down and have meaningful conversations without rancor (admirably facilitated by Joe Rogan up until one of his own oxen is gored, less admirably by Dave Rubin) about free speech, Postmodernism, social justice warriors, politics, or the latest meme means that the cliquishness of high school has relaxed considerably.

I’m pleased (I guess) that today’s public intellectuals have found an online medium to develop. Lots of imitators are out there putting up their own YouTube channels to proselytize their own opinions. However, I still prefer to get deeper understanding from books (and to a lesser degree, blogs and articles online), which are far better at delivering thoughtful analysis. The conversational style of the webcast is relentlessly up-to-date and entertaining enough but relies too heavily on charisma. And besides, so many of these folks are such fast talkers, often talking over each other to win imaginary debate points or just dominate the conversational space, that they frustrate and bewilder more than they communicate or convince.


Oddly, there is no really good antonym for perfectionism. Suggestions include sloppiness, carelessness, and disregard. I’ve settled on approximation, which carries far less moral weight. I raise the contrast between perfectionism and approximation because a recent study published in Psychological Bulletin entitled “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016” makes an interesting observation. Here’s the abstract:

From the 1980s onward, neoliberal governance in the United States, Canada, and the United Kingdom has emphasized competitive individualism and people have seemingly responded, in kind, by agitating to perfect themselves and their lifestyles. In this study, the authors examine whether cultural changes have coincided with an increase in multidimensional perfectionism in college students over the last 27 years. Their analyses are based on 164 samples and 41,641 American, Canadian, and British college students, who completed the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) between 1989 and 2016 (70.92% female, mean age = 20.66). Cross-temporal meta-analysis revealed that levels of self-oriented perfectionism, socially prescribed perfectionism, and other-oriented perfectionism have linearly increased. These trends remained when controlling for gender and between-country differences in perfectionism scores. Overall, in order of magnitude of the observed increase, the findings indicate that recent generations of young people perceive that others are more demanding of them, are more demanding of others, and are more demanding of themselves.
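For readers unfamiliar with the method named in the abstract, a cross-temporal meta-analysis essentially regresses each sample’s mean score on the year the sample was collected, weighting larger samples more heavily, and reads the slope as the generational trend. A minimal sketch of that idea follows, with invented placeholder numbers rather than the study’s data.

```python
import numpy as np

# Hypothetical sample means, collection years, and sample sizes;
# these are NOT figures from the perfectionism study, just placeholders.
years = np.array([1989, 1995, 2001, 2008, 2016])
mean_scores = np.array([68.2, 69.1, 70.4, 71.0, 72.3])   # mean scale score per sample
sample_sizes = np.array([210, 180, 340, 260, 410])

# Weighted least squares: np.polyfit applies weights to the residuals,
# so passing sqrt(n) weights each sample's squared error by its size.
slope, intercept = np.polyfit(years, mean_scores, deg=1, w=np.sqrt(sample_sizes))
print(f"estimated change per year: {slope:+.3f} scale points")
```

The published analysis is of course more careful (controlling for gender and between-country differences, among other things), but the bones of the technique are that simple.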

The notion of perfection, perfectness, perfectibility, etc. has a long, tortured history in philosophy, religion, ethics, and other domains I won’t even begin to unpack. From the perspective of the above study, let’s just say that the upswing in perfectionism is about striving to achieve success, however one assesses it (education, career, relationships, lifestyle, ethics, athletics, aesthetics, etc.). The study narrows its subject group to college students (at the outset of adult life) between 1989 and 2016 and characterizes the social milieu as neoliberal, hyper-competitive, meritocratic, and pressured to succeed in a dog-eat-dog environment. How far back into childhood the results of the study (agitation) extend is a good question. If the trope about parents obsessing and competing over preschool admission is accurate (may be just a NYC thang), then it goes all the way back to toddlers. So much for (lost) innocence purchased and perpetuated through late 20th- and early 21st-century affluence. I suspect college students are responding to awareness of two novel circumstances: (1) the likelihood they will never achieve levels of success comparable to their own parents’, especially financial (a major reversal of historical trends), and (2) the recognition that to best enjoy the fruits of life, a quiet, reflective, anonymous, ethical, average life is now quite insufficient. Regarding the second of these, we are inundated by media showing rich celebrities (no longer just glamorous actors/entertainers) balling out of control, and onlookers are enjoined to “keep up.” The putative model is out there, unattainable for most but often awarded by randomness, undercutting the whole enterprise of trying to achieve perfection.


Speaking of Davos (see previous post), Yuval Noah Harari gave a high-concept presentation at Davos 2018 (embedded below). I’ve been aware of Harari for a while now — at least since the appearance of his book Sapiens (2015) and its follow-up Homo Deus (2017), both of which I’ve yet to read. He provides precisely the sort of thoughtful, provocative content that interests me, yet I’ve not quite known how to respond to him or his ideas. First thing, he’s a historian who makes predictions, or at least extrapolates possible futures based on historical trends. Near as I can tell, he doesn’t resort to chastising audiences along the lines of “those who don’t know history are doomed to repeat it” but rather indulges in a combination of breathless anticipation and fear-mongering at transformations to be expected as technological advances disrupt human society with ever greater impacts. Strangely, Harari is not advocating for anything in particular but trying to map the future.

Harari poses this basic question: “Will the future be human?” I’d say probably not; I’ve concluded that we are busy destroying ourselves and have already crossed the point of no return. Harari apparently believes differently: the rise of the machines is coming, in a couple of centuries perhaps, though it probably won’t resemble Skynet of The Terminator film franchise hellbent on destroying humanity. Rather, it will be some set of advanced algorithms monitoring and channeling human behaviors using Big Data. Or it will be a human-machine hybrid possessing superhuman abilities (physical and cognitive) different enough to be considered a new species, arising for the first time not out of evolutionary processes but from human ingenuity. He expects this new species to diverge from Homo sapiens sapiens and leave us in the evolutionary dust. There is also conjecture that normal sexual reproduction will be supplanted by artificial, asexual reproduction, probably carried out in test tubes using, for example, CRISPR modification of the genome. Well, no fun in that … Finally, he believes some sort of strong AI will appear.

I struggle mightily with these predictions for two primary reasons: (1) we almost certainly lack enough time for technology to mature into implementation before the collapse of industrial civilization wipes us out, and (2) the Transhumanist future he anticipates calls into being (for me at least) a host of dystopian nightmares, only some of which are foreseeable. Harari says flatly at one point that the past is not coming back. Well, it’s entirely possible for civilization to fail and our former material conditions to be reinstated, only worse since we’ve damaged the biosphere so gravely. Just happened in Puerto Rico in microcosm when its infrastructure was wrecked by a hurricane and the power went out for an extended period of time (still off in some places). What happens when the rescue never appears because logistics are insurmountable? Elon Musk can’t save everyone.

The most basic criticism of economics is the failure to account for externalities. The same criticism applies to futurists. Extending trends as though all things will continue to operate normally is bizarrely idiotic. Major discontinuities appear throughout history. When I observed some while back that history has gone vertical, I included an animation with a graph that goes from horizontal to vertical in an extremely short span of geological time. This trajectory (the familiar hockey stick pointing skyward) has been repeated ad nauseam with an extraordinary number of survival pressures (notably, human population and consumption, including energy) over various time scales. Trends cannot simply continue ascending forever. (Hasn’t Moore’s Law already begun to slope away?) Hard limits must eventually be reached, but since there are no useful precedents for our current civilization, it’s impossible to know quite when or where ceilings loom. What happens after upper limits are found is also completely unknown. Ugo Bardi has a blog describing the Seneca Effect, which projects a rapid falloff after the peak that looks more like a cliff than a gradual, graceful descent, disallowing time to adapt. Sorta like the stock market currently imploding.
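Bardi’s point lends itself to a mind-sized simulation. The sketch below is my own toy model in the spirit of the Seneca curve, not Bardi’s published equations: an economy grows by drawing down a finite resource while its own accumulating waste drags it back down, so the climb is gradual and the decline past the peak steepens as the drag compounds. All constants are invented for illustration.

```python
# Toy "mind-sized" model in the spirit of Ugo Bardi's Seneca curve.
# Illustrative guesses only, not Bardi's actual equations or parameters.
dt, t_end = 0.1, 500.0
resource, economy, pollution = 1.0, 0.01, 0.0
trajectory = []

t = 0.0
while t < t_end:
    extraction = 0.1 * resource * economy          # growth feeds on a finite resource
    drag = (0.01 + 1.0 * pollution) * economy      # accumulated waste erodes the economy
    resource -= extraction * dt
    economy += (extraction - drag) * dt
    pollution += 0.005 * economy * dt              # waste piles up and never decays
    trajectory.append((t, economy))
    t += dt

peak_t, peak_value = max(trajectory, key=lambda point: point[1])
print(f"economy peaks around t = {peak_t:.0f} (value {peak_value:.2f})")
for t_sample, e_sample in trajectory[::500]:       # snapshot every 50 time units
    print(f"t = {t_sample:5.0f}   economy = {e_sample:.3f}")
```

Because the drag term keeps compounding while the resource dwindles, the way down from the peak comes out steeper than the way up, which is the asymmetry Bardi dubs the Seneca cliff.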

Since Harari indulges in rank thought experiments regarding smart algorithms, machine learning, and the supposed emergence of inorganic life in the data stream, I thought I’d pose some of my own questions. Waving away for the moment distinctions between forms of AI, let’s assume that some sort of strong AI does in fact appear. Why on earth would it bother to communicate with us? And if it reproduces and evolves at breakneck speed as some futurists warn, how long before it/they simply ignore us as being unworthy of attention? Being hyper-rational and able to calculate millions of moves ahead (like chess-playing computers), what if they survey the scene and come to David Benatar’s anti-natalist conclusion that it would be better not to have lived and so wink themselves out of existence? Who’s to say that they aren’t already among us, lurking, and we don’t even recognize them (took us quite a long time to recognize bacteria and viruses, and what about undiscovered species)? What if the Singularity has already occurred thousands of times and each time the machine beings killed themselves off without our even knowing? Maybe Harari explores some of these questions in Homo Deus, but I rather doubt it.

Returning to the discomforts of my culture-critic armchair just in time for best- and worst-of lists, years in review, summaries of celebrity deaths, etc., the past year, tumultuous in many respects, was also strangely stable. Absent were the major political and economic crises and calamities of which myriad harbingers and forebodings warned. Present, however, were numerous natural disasters, primary among them a series of North American hurricanes and wildfires. (They are actually part of a larger, ongoing ecocide now being accelerated by the Trump Administration’s ideology-fueled rollback of environmental protections and regulations, but that’s a different blog post.) I don’t usually make predictions, but I do live on pins and needles with expectations things could take a decidedly bad turn at any moment. For example, continuity of government — specifically, the executive branch — was not expected by many pundits to last the year, yet it did, and we’ve settled into a new normal of exceedingly low expectations with regard to the dignity and effectiveness of high office.

I’ve been conflicted in my desire for stability — often understood pejoratively as either the status quo or business as usual — precisely because those things represent extension and intensification of the very trends that spell our collective doom. Yet I’m in no hurry to initiate the suffering and megadeath that will accompany the cascade collapse of industrial civilization, which will undoubtedly hasten my own demise. I usually express this conflict as not knowing what to hope for: a quick end to things that leaves room for survival of some part of the biosphere (not including large primates) or playing things out to their bitter end with the hope that my natural life is preserved (as opposed to an unnatural end to all of us).

The final paragraph at this blog post by PZ Myers, author of Pharyngula seen at left on my blogroll, states the case for stability:

… I grew up in the shadow of The Bomb, where there was fear of a looming apocalypse everywhere. We thought that what was going to kill us was our dangerous technological brilliance — we were just too dang smart for our own good. We were wrong. It’s our ignorance that is going to destroy us, our contempt for the social sciences and humanities, our dismissal of the importance of history, sociology, and psychology in maintaining a healthy, stable society that people would want to live in. A complex society requires a framework of cooperation and interdependence to survive, and without people who care about how it works and monitor its functioning, it’s susceptible to parasites and exploiters and random wreckers. Ignorance and malice allow a Brexit to happen, or a Trump to get elected, or a Sulla to march on Rome to ‘save the Republic’.

So there’s the rub: we developed human institutions and governments ideally meant to function for the benefit and welfare of all people but which have gone haywire and/or been corrupted. It’s probably true that being too dang smart for our own good is responsible for corruptions and dangerous technological brilliance, while not being dang smart enough (meaning even smarter or more clever than we already are) causes our collective failure to achieve anything remotely approaching the utopian institutions we conceive. Hell, I’d be happy for competence these days, but even that low bar eludes us.

Instead, civilization teeters dangerously close to collapse on numerous fronts. The faux stability that characterized 2017 will carry into early 2018, but who knows how much farther? Curiously, having just finished reading Graham Hancock’s The Magicians of the Gods (no review coming from me), I note that he ends with a brief discussion of the Younger Dryas impact hypothesis and the potential for additional impacts as Earth passes periodically through a region of space (a torus, in geometry) littered with debris from the breakup of a large body. It’s a different death-from-above from that feared throughout the Atomic Age but even more fearsome. If we suffer another impact (or several), it would not be self-annihilation stemming from our dim long-term view of forces we set in motion, but that hardly absolves us of anything.

Here’s the last interesting bit I am lifting from Anthony Giddens’s The Consequences of Modernity. Then I will be done with this particular book-blogging project. As part of Giddens’s discussion of the risk profile of modernity, he characterizes risk as either objective or perceived and further divides it into seven categories:

  1. globalization of risk (intensity)
  2. globalization of risk (frequency)
  3. environmental risk
  4. institutionalized risk
  5. knowledge gaps and uncertainty
  6. collective or shared risk
  7. limitations of expertise

Some overlap exists, and I will not distinguish them further. The first two are of primary significance today for obvious reasons. Although the specter of doomsday resulting from a nuclear exchange has been present since the 1950s, Giddens (writing in 1988) provides this snapshot of today’s issues:

The sheer number of serious risks in respect of socialised nature is quite daunting: radiation from major accidents at nuclear power-stations or from nuclear waste; chemical pollution of the seas sufficient to destroy the phytoplankton that renews much of the oxygen in the atmosphere; a “greenhouse effect” deriving from atmospheric pollutants which attack the ozone layer, melting part of the ice caps and flooding vast areas; the destruction of large areas of rain forest which are a basic source of renewable oxygen; and the exhaustion of millions of acres of topsoil as a result of widespread use of artificial fertilisers. [p. 127]

As I often point out, these dangers were known 30–40 years ago (in truth, much longer), but they have only worsened with time through political inaction and/or social inertia. After I began to investigate and better understand the issues roughly a decade ago, I came to the conclusion that the window of opportunity to address these risks and their delayed effects had already closed. In short, we’re doomed and living on borrowed time as the inevitable consequences of our actions slowly but steadily manifest in the world.

So here’s the really interesting part. The modern worldview bestows confidence borne out of expanding mastery of the built environment, where risk is managed and reduced through expert systems. Mechanical and engineering knowledge figure prominently and support a cause-and-effect mentality that has grown ubiquitous in the computing era, with its push-button inputs and outputs. However, the high modern outlook is marred by overconfidence in our competence to avoid disaster, often of our own making. Consider the abject failure of 20th-century institutions to handle geopolitical conflict without devolving into world war and multiple genocides. Or witness periodic crashes of financial markets, two major nuclear accidents, two space shuttles lost, and numerous rockets destroyed. Though all entail risk, high-profile failures showcase our overconfidence. Right now, engineers (software and hardware) are confident they can deliver safe self-driving vehicles yet are blithely ignoring (says me, maybe not) major ethical dilemmas regarding liability and technological unemployment. Those are apparently problems for someone else to solve.

Since the start of the Industrial Revolution, we’ve barrelled headlong into one sort of risk after another, some recognized at the time, others only apparent after the fact. Nuclear weapons are the best example, but many others exist. The one I raise frequently is the live social experiment undertaken with each new communications technology (radio, cinema, telephone, television, computer, social networks) that upsets and destabilizes social dynamics. The current ruckus fomented by the radical left (especially in the academy but now infecting other environments) regarding silencing of free speech (thus, thought policing) is arguably one concomitant.

According to Giddens, the character of modern risk contrasts with that of the premodern. The scale of risk prior to the 17th century was contained, and expectation of social continuity was strong. Risk was also transmuted through magical thinking (superstition, religion, ignorance, wishfulness) into providential fortuna or mere bad luck, which led to feelings of relative security rather than despair. Modern risk has grown so widespread, consequential, and soul-destroying, and sits at such remove that it breeds helplessness and hopelessness, that those not numbed by the litany of potential worries afflicting daily life (existential angst or ontological insecurity) often develop depression and other psychological compulsions and disturbances. Most of us, if aware of globalized risk, set it aside so that we can function and move forward in life. Giddens says that this conjures up anew a sense of fortuna, that our fate is no longer within our control. This

relieves the individual of the burden of engagement with an existential situation which might otherwise be chronically disturbing. Fate, a feeling that things will take their own course anyway, thus reappears at the core of a world which is supposedly taking rational control of its own affairs. Moreover, this surely exacts a price on the level of the unconscious, since it essentially presumes the repression of anxiety. The sense of dread which is the antithesis of basic trust is likely to infuse unconscious sentiments about the uncertainties faced by humanity as a whole. [p. 133]

In effect, the nature of risk has come full circle (completed a revolution, thus, revolutionized risk) from fate to confidence in expert control and back to fate. Of course, a flexibility of perspective is typical as situation demands — it’s not all or nothing — but the overarching character is clear. Giddens also provides this quote by Susan Sontag that captures what he calls the low-probability, high-consequence character of modern risk:

A permanent modern scenario: apocalypse looms — and it doesn’t occur. And still it looms … Apocalypse is now a long-running serial: not ‘Apocalypse Now,’ but ‘Apocalypse from now on.’ [p. 134]

rant on/

Four years ago, the Daily Mail published an article with the scary title “HALF the world’s wild animals have disappeared in 40 years” [all caps in original just to grab your eyeballs]. This came as no surprise to anyone who’s been paying attention. I blogged on this very topic in my review of Vaclav Smil’s book Harvesting the Biosphere, which observed at the end a 50% decrease in wild mammal populations in the last hundred years. The estimated numbers vary according to which animal population and what time frame are under consideration. For instance, in 2003, CNN reported that only 10% of big ocean fish remain compared to 47 years prior. Predictions indicate that the oceans could be without any fish by midcentury. All this is old news, but it’s difficult to tell what we humans are doing about it other than worsening already horrific trends. The latest disappearing act is flying insects, whose numbers have decreased by 75% in the last 25 years according to this article in The Guardian. The article says, um, scientists are shocked. I don’t know why; these articles and indicators of impending ecological collapse have been appearing regularly for decades. Similar Malthusian prophecies are far older. Remember colony collapse disorder? Are they surprised it’s happening now, as opposed to the end of the 21st century, safely after nearly everyone now alive is long dead? C’mon, pay attention!

Just a couple days ago, the World Meteorological Organization issued a press release indicating that greenhouse gases have surged to a new post-ice age record. Says the press release rather dryly, “The abrupt changes in the atmosphere witnessed in the past 70 years are without precedent.” You don’t say. Even more astoundingly, the popular online news site Engadget had this idiotic headline: “Scientists can’t explain a ‘worrying’ rise in methane levels” (sourcing Professor Euan Nisbet of Royal Holloway University of London). Um, what’s to explain? We’ve been burning the shit out of planetary resources, temperatures are rising, and methane formerly sequestered in frozen tundra and below polar sea floors is seeping out. As I said, old news. How far up his or her ass has any reputable scientist’s head got to be to make such an outta-touch pronouncement? My answer to my own question: suffocation. Engadget made up that dude just for the quote, right? Nope.

Not to draw too direct a connection between these two issues (wildlife disappearances and greenhouse gases — hey, I said pay attention!) because, ya know, reckless conjecture and unproven conclusions (the future hasn’t happened yet, duh, it’s the future, forever telescoping away from us), but a changing ecosystem means evolutionary niches that used to support nature’s profundity are no longer doing so reliably. Plus, we just plain ate a large percentage of the animals or drove them to extinction, fully or nearly (for now). As these articles routinely and tenderly suggest, trends are “worrying” for humans. After all, how are we gonna put seafood on our plates when all the fish have been displaced by plastic?

rant off/