Posts Tagged ‘Pose of Objectivity’

Another from Hari Kunzru’s “Easy Chair” column, this time the July 2022 issue of Harper’s Magazine:

We might hear in [Thomas] Nagel’s cosmic detachment an echo of anatta — the Buddhist doctrine that there is no essence or soul grounding human existence. For Buddhists, the clear light of reality is visible only to those who abandon the illusion of selfhood. Objectivity, in the way non-Buddhists usually think about it, doesn’t erase the self, even if it involves a flight from individuality. It actually seems to make the self more powerful, more authoritative. The capacity to be objective is seen as something to strive for, an overcoming of the cognitive biases that smear or smudge the single window and impair our ability to see the world “as it really is.” Objectivity is earned through rigor and discipline. It is selfhood augmented.

anattā

I admit (again) to being bugged by things found on YouTube — a miserable proxy for the marketplace of ideas — many of which are dumb, wrongheaded, or poorly framed. It’s not my goal to correct every mistake, but sometimes the inane utterances of intellectuals and specialists I might otherwise admire just stick in my craw. It’s hubris on my part to insist on my own understandings, considering my utter lack of standing as an acknowledged authority, but I’m not without my own multiple areas of expertise (I assert immodestly).

The initial purpose for this blog was to explore the nature of consciousness. I’ve gotten badly sidetracked writing about collapse, media theory, epistemology, narrative, and cinema, so let me circle back around. This is gonna be long.

German philosopher Oswald Spengler takes a crack at defining consciousness:

Human consciousness is identical with the opposition between the soul and the world. There are gradations in consciousness, varying from a dim perception, sometimes suffused by an inner light, to an extreme sharpness of pure reason that we find in the thought of Kant, for whom soul and world have become subject and object. This elementary structure of consciousness is not capable of further analysis; both factors are always present together and appear as a unity.


Didn’t expect to come back to this one so soon, but an alternative meaning behind my title just appeared. Whereas the first post was about cancel culture, this redux is about finding people willing and able to act as mouthpieces for whatever narrative the powers that be wish to foist on the public, as in “Where do they dig up these people?”

Wide-ranging opinion is not difficult to obtain in large populations, so although plenty of folks are willing to be paid handsomely to mouth whatever words are provided to them (e.g., public relations hacks, social media managers, promoters, spokespersons, actors, and straight-up shills in advertisements of all sorts), a better approach is simply to find people who honestly believe the chosen narrative so that they can do others’ bidding guilelessly, which is to say, without any need of selling their souls. This idea first came to my attention in an interview (can’t remember the source) given by Noam Chomsky, where he chided the interviewer, who had protested that no one was telling him what to say, by observing that if he didn’t already share the desired opinion, he wouldn’t have the job. The interviewer was hired and retained precisely because he was already onboard. Those who depart from the prescribed organizational perspective are simply not hired, or if their opinions evolve away from the party line, they are fired. No need to name names, but many have discovered that journalistic objectivity (or at least a pose of objectivity) and independent thought are not high values in the modern media landscape.

Here’s a good example: 19-year-old climate change denier/skeptic Naomi Seibt is being billed as the anti-Greta Thunberg. No doubt Seibt believes the opinions she will be presenting at the Heartland Institute later this week. All the more authenticity if she does. But it’s a little suspicious, brazen and clumsy even, that another European teenage girl is being raised up to dispel Time Magazine‘s 2019 Person of the Year, Greta Thunberg. Maybe it’s even true, as conspiracists suggest, that Thunberg herself is being used to drive someone else’s agenda. The MSM is certainly using her to drive ratings. These questions are all ways to distract from the main point, which is that we’re driving ourselves to extinction (alongside most of the rest of the living world) by virtue of the way we inhabit the planet and consume its finite resources.

Here’s a second example: a “debate” on the subject of socialism between economists Paul Krugman and Richard Wolff on the show Democracy Now!

 

Let me disclose my biases up front. I’ve never liked economists as analysts of culture, sociology, or electoral politics. Krugman in particular has always read like more of an apologist for economic policies that support the dysfunctional status quo, so I pay him little attention. On the other hand, Wolff has engaged his public as a respectable teacher/explainer of the renewed socialist movement of which he is a part, and I give him my attention regularly. In truth, neither of these fellows needed to be “dug up” from obscurity. Both are heavily covered in the media, and they did a good job not attacking each other while making their cases in the debate.

The weird thing was how Krugman is so clearly triggered by the word socialism, even though he acknowledges that the U.S. has many robust examples of socialism already. He was clearly the one designated to object to socialism as an ideology, and he described socialism as an electoral kiss of death. Maybe he has too many childhood memories of ducking, covering, and cowering during those Atomic Era air raid drills, and so socialism and communism were imprinted on him as evils never to be entertained. At least three generations after him lack those memories, however, and are not traumatized by the prospect of socialism. In fact, that’s what the Democratic primaries are demonstrating: no fear but rather enthusiastic support for the avowed Democratic Socialist on the ballots. Who are the fearful ones? Capitalists. They would be wise to learn sooner rather than later that the public, as Wolff says plainly, is ready for change. Change is coming for them.

Third version of this topic. Whereas the previous two were about competing contemporary North American ways of knowing, this one is broader in both time and space.

The May 2019 issue of Harper’s Magazine has a fascinating review of Christina Thompson’s book Sea People: The Puzzle of Polynesia (2019). Beyond the puzzle itself — how did Polynesian people migrate to, settle, and populate the far-flung islands of the Central and South Pacific? — the review hits upon one of my recurring themes on this blog, namely, that human cognition is plastic enough to permit highly divergent ways of knowing.

The review (and book?) is laden with Eurocentric detail about the “discovery” of closely related Polynesian cultures dispersed more widely (geographically) than any other culture prior to the era of mass migration. Indeed, the reviewer chides the author at one point for transforming Polynesia from a subject in its own right into an exotic object of (Western) fascination. This distorted perspective is commonplace and follows from the earlier “discovery” and colonization of North America as though it were not already populated. Cartographers even today are guilty of this Eurocentrism, relegating “empty” expanses of the Pacific Ocean to irrelevance in maps when in fact the Pacific is “the dominant feature of the planet” and contains roughly twenty-five thousand islands (at current sea level? — noting that sea level was substantially lower during the last ice age some 13,000 years ago but is due to rise substantially by the end of this century and beyond, engulfing many of the islands now lying dangerously close to sea level). Similar distortions are needed to squash the spherical (3D) surface of the globe onto planar (2D) maps (e.g., the Mercator projection, which largely ignores the Pacific Ocean in favor of continents; other projections shown here) more easily conceptualized (for Westerners) in terms of coordinate geometry using latitude and longitude (i.e., the Cartesian plane).
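The Mercator projection mentioned above has a simple closed form, and seeing it makes the distortion concrete. Here is a minimal sketch in Python (my own illustration, not anything drawn from the review or the book): longitude maps linearly onto the horizontal axis, while latitude is stretched ever more sharply toward the poles, which is part of why high-latitude continents swell on the page while the equatorial Pacific visually recedes. The coordinates in the example are ordinary latitude/longitude values chosen only for illustration.

```python
import math

def mercator(lat_deg, lon_deg, radius=1.0):
    """Project latitude/longitude (degrees) onto Mercator map coordinates."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * lon                                         # east-west: linear in longitude
    y = radius * math.log(math.tan(math.pi / 4 + lat / 2))   # north-south: stretched toward the poles
    return x, y

# Illustrative comparison (coordinates are approximate, for demonstration only):
print(mercator(21.3, -157.9))   # a tropical Pacific location: y stays modest
print(mercator(59.9, 10.8))     # a high-latitude European location: y is inflated
```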

The review mentions the familiar dichotomy of grouping a hammer, saw, hatchet, and log in terms of abstract categories (Western thought) vs. utility or practicality (non-Western). Exploration of how different ways of knowing manifest is, according to the review, among the more intellectually exciting parts of the book. That’s the part I’m latching onto. For instance, the review offers this:

Near the middle of Sea People, Thompson explores the ramification of Polynesia as, until contact, an oral culture with “an oral way of seeing.” While writing enables abstraction, distancing, and what we generally call objectivity, the truth of oral cultures is thoroughly subjective. Islands aren’t dots on a map seen from the sky but destinations one travels to in the water.

This is the crux of the puzzle of Polynesians fanning out across the Pacific approximately one thousand years ago. They had developed means of wayfinding in canoes and outriggers without instruments or maps roughly 500 years prior to Europeans crossing the oceans in sailing ships. Perhaps I’m reading too much into the evidence, but abstraction and objectivity as a particular way of knowing, bequeathed to Western Europe via the Enlightenment and development of the scientific method, stunted or delayed exploration of the globe precisely because explorers began with a god’s eye view of the Earth from above rather than from the surface (object vs. subject). In contrast, quoting here from the book rather than the review, Polynesians used

a system known as etak, in which they visualize a “reference island” — which is usually a real island but may also be imaginary — off to one side of the path they are following, about midway between their starting point and their destination. As the journey progresses, this island “moves” under each of the stars in the star path [situated near the horizon rather than overhead], while the canoe in which the voyagers are traveling stays still. Of course, the navigators know that it is the canoe and not the islands that are moving, but this is the way they conceptualize the voyage.

Placing oneself at the center of the world or universe — at least for the purpose of navigation — is a conceptual pose Westerners discarded when heliocentrism gradually replaced geocentrism. (Traveling using GPS devices ironically places the traveler back at the center of the map with terrain shifting around the vehicle, but it’s a poor example of wayfinding precisely because the traveler fobs the real work onto the device and likely possesses no real understanding or skill traversing the terrain besides following mechanical instructions.) While we Westerners might congratulate ourselves for a more accurate, objective orientation to the stars, its unwitting limitations are worth noting. Recent discoveries regarding human prehistory, especially megalithic stone construction accomplished with techniques still unknown and difficult to explain or replicate even with modern technology, point to the existence of other ways of knowing lost to contemporary human cultures steadily triangulating on and conforming to Western thought (through the process of globalization). Loss of diversity of ways of knowing creates yet another sort of impoverishment that can only be barely glimpsed since most of us are squarely inside the bubble. Accordingly, it’s not for nothing that some unusually sensitive critics of modernity suggest we’re entering a new Dark Age.

 

I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases born of methodology and, to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Throughout history, plenty of prohibitions against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without naming it explicitly, Harari is invoking the precautionary principle. Harari also observed that some of those effects can be displaced hundreds and thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans due to reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.

Continuing from part 1, which is altogether too much screed and frustration with Sam Harris, I now point to several analyses that support my contentions. First is an article in The Nation about the return of so-called scientific racism, which speaks directly about Charles Murray, Sam Harris, and Andrew Sullivan, all of whom are embroiled in the issue. Second is an article in The Baffler about constructing arguments ex post facto to conform to conclusions motivated in advance of evidence. Most of us are familiar with the constructed explanation, where in the aftermath of an event, pundits, press agents, and political insiders propose various explanatory narratives to gain control over what will eventually become the conventional understanding. The Warren Commission’s report on the assassination of JFK is one such example, and I daresay few now believe that the report and the consensus it presents weren’t politically motivated and highly flawed. Both linked articles above are written by Edward Burmila, who blogs at Gin and Tacos (see blogroll). Together, they paint a dismal picture of how reason and rhetoric can be corrupted despite the sheen of scientific respectability.

Third is an even more damaging article (actually a review of the new anthology Trump and the Media) in the Los Angeles Review of Books by Nicholas Carr asking the pointed question “Can Journalism Be Saved?” Admittedly, journalism is not equivalent to reason or rationalism, but it is among several professions that employ claims of objectivity, accuracy, and authority. Thus, journalism demands both attention and respect far in excess of the typical blogger (such as me) or watering-hole denizen perched atop a barstool. Consider this pullquote:

… the flaws in computational journalism can be remedied through a more open and honest accounting of its assumptions and limitations. C. W. Anderson, of the University of Leeds, takes a darker view. To much of the public, he argues, the pursuit of “data-driven objectivity” will always be suspect, not because of its methodological limits but because of its egghead aesthetics. Numbers and charts, he notes, have been elements of journalism for a long time, and they have always been “pitched to a more policy-focused audience.” With its ties to social science, computational journalism inevitably carries an air of ivory-tower elitism, making it anathema to those of a populist bent.

Computational journalism is contrasted with other varieties of journalism based on, say, personality, emotionalism, advocacy, or simply a mad rush to print (or pixels) to scoop the competition. This hyperrational approach has already revealed its failings, as Carr reports in his review.

What I’m driving at is that, despite frequent appeals to reason, authority, and accuracy (especially the quantitative sort), certain categories of argumentation fail to register on the average consumer of news and information. It’s not a question of whether arguments are right or wrong, precisely; it’s about what appeals most to those paying even a modest bit of attention. And the primary appeal for most (I judge) isn’t reason. Indeed, reason is swept aside handily when a better, um, reason for believing something appears. If one has done the difficult work of acquiring critical thinking and reasoning skills, it can be quite the wake-up call when others fail to behave according to reason, such as acting against enlightened self-interest. The last presidential election was a case in point.

Circling back to something from an earlier blog post, much of human cognition is based on mere sufficiency: whatever is good enough in the moment gets nominated, then promoted to belief and/or action. Fight, flight, or freeze is one example. Considered evaluation and reason are not even factors. Snap judgments, gut feelings, emotional resonances, vibes, heuristics, and Gestalts dominate momentary decision-making, and in the absence of convincing countervailing information (if indeed one is even vulnerable to reason, which would be an unreasonable assumption), action is reinforced and suffices as belief.

Yet more in part 3 to come.

Oddly, there is no really good antonym for perfectionism. Suggestions include sloppiness, carelessness, and disregard. I’ve settled on approximation, which carries far less moral weight. I raise the contrast between perfectionism and approximation because a recent study published in Psychological Bulletin entitled “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016” makes an interesting observation. Here’s the abstract:

From the 1980s onward, neoliberal governance in the United States, Canada, and the United Kingdom has emphasized competitive individualism and people have seemingly responded, in kind, by agitating to perfect themselves and their lifestyles. In this study, the authors examine whether cultural changes have coincided with an increase in multidimensional perfectionism in college students over the last 27 years. Their analyses are based on 164 samples and 41,641 American, Canadian, and British college students, who completed the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) between 1989 and 2016 (70.92% female, M_age = 20.66). Cross-temporal meta-analysis revealed that levels of self-oriented perfectionism, socially prescribed perfectionism, and other-oriented perfectionism have linearly increased. These trends remained when controlling for gender and between-country differences in perfectionism scores. Overall, in order of magnitude of the observed increase, the findings indicate that recent generations of young people perceive that others are more demanding of them, are more demanding of others, and are more demanding of themselves.

The notion of perfection, perfectness, perfectibility, etc. has a long, tortured history in philosophy, religion, ethics, and other domains I won’t even begin to unpack. From the perspective of the above study, let’s just say that the upswing in perfectionism is about striving to achieve success, however one assesses it (education, career, relationships, lifestyle, ethics, athletics, aesthetics, etc.). The study narrows its subject group to college students (at the outset of adult life) between 1989 and 2016 and characterizes the social milieu as neoliberal, hyper-competitive, meritocratic, and pressured to succeed in a dog-eat-dog environment. How far back into childhood the results of the study (agitation to perfect oneself) extend is a good question. If the trope about parents obsessing and competing over preschool admission is accurate (maybe just a NYC thang), then it goes all the way back to toddlers. So much for (lost) innocence purchased and perpetuated through late 20th- and early 21st-century affluence. I suspect college students are responding to awareness of two novel circumstances: (1) the likelihood they will never achieve levels of success comparable to their own parents’, especially financial (a major reversal of historical trends), and (2) recognition that to best enjoy the fruits of life, a quiet, reflective, anonymous, ethical, average life is now quite insufficient. Regarding the second of these, we are inundated by media showing rich celebrities (no longer just glamorous actors/entertainers) balling out of control, and onlookers are enjoined to “keep up.” The putative model is out there, unattainable for most but often awarded by randomness, undercutting the whole enterprise of trying to achieve perfection.


We all live in perceptual bubbles of varying breadth and focus. Otherwise, we would be omniscient, which none of us is or can be. Two hot topics that lie outside my perceptual bubble are geopolitical struggles in Israel and Northern Ireland. I’ve also read analyses that suggest that our current troubles and involvements in the Middle East are part of a clash of cultures going back two millennia, where the mostly Christian West won the battle back in the Middle Ages but newly gained oil wealth in the Middle East has prompted a resumption of hostilities. I have only a passing acquaintance with geopolitics, and the complexity of the myriad interacting elements keeps me from getting a good fix on what’s proven to be a constantly shifting target. That aspect of modern history is the domain of intelligence agencies, military strategists, and diplomats. I don’t necessarily trust those professionals, though, since they operate with their own perceptual biases. (When your main tool is a bomb, everything tends to look like a target.) But I also recognize that I’m in a really lousy position to second-guess or drive from the back seat. Plus, I have zero influence, even at the voting booth.

In the narrower arena of domestic and campaign politics, the news media (journalists) have failed in their legitimate role as the fourth estate, which function is now being performed by their cousins in entertainment media. (I’ll skip the diatribe that journalism has essentially merged with entertainment and utterly lost any claim to objectivity.) Specifically, we live in a surprisingly mature age of political satire replete with shows that deliver news in comic form far better than serious journalists do with straight faces. The model is undoubtedly The Daily Show, which has already spun off The Colbert Report, Last Week Tonight, Full Frontal, and The Nightly Show. Each of these shows features a host considerably smarter than the audience, who proceeds with rapid-fire (though scripted) takedowns of all manner of political dysfunction. Each has its own stylistic tics, but in aggregate, they arguably do a better job of investigative journalism these days than, say, 60 Minutes, Dateline, or 20/20. Better yet, since they don’t pretend to be serious journalism, they can dispense with bogus claims to objectivity and simply go ahead and indulge in righteous indignation and silly stunts, exposing corruption, stupidity, and inanity in all their shameful manifestations. Political humor has now become a form of gallows humor.


Backtracking to something in The Master and His Emissary I read more than two months ago, McGilchrist has a fairly involved discussion of Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind. I read Jaynes more than a decade ago and was pretty excited by his thesis, which I couldn’t then evaluate or assess very well. (I’m probably not much better equipped now.) Amazon.com reveals that there are other reviews and updates of Jaynes’ work since its publication in 1976, but I was unaware of them until just now. I was pleased to find McGilchrist giving so much attention to Jaynes — a discussion spanning 4 pp. with the benefit of several decades of further research. I will provide McGilchrist’s summary of Jaynes’ highly original and creative thesis rather than rely on memory more than a decade old:

… [C]onsciousness, in the sense of introspective self-awareness, first arose in Homeric Greece. He [Jaynes] posits that, when the heroes of the Iliad (and the Old Testament) are reported as having heard the voices of the gods (or God) giving them commands or advice, this is not a figurative expression: they literally heard voices. The voices were speaking their own intuitive thoughts, and arose from their own minds, but were perceived as external, because at this time man was becoming newly aware of his own (hitherto unconscious) intuitive thought processes.

If one accepts (as I believe one should) that the ancient mind was fundamentally different from the modern mind, the latter of which was just beginning to coalesce at the time of the ancient Greeks (ca. 8th century BCE), this explains why all the sword-and-sandal movie epics get characters fundamentally wrong by depicting heroes especially, but others as well, with the purposefulness and self-possession of modern thinkers well before such qualities were established in antiquity. Antiquity is not prehistory, however, so there’s no danger of ancients being depicted as cavemen grunting and gesticulating without the benefit of language (except perhaps when they’re presented in stylized fashion as voiceless barbarians). But in typical modern gloss on centuries long past, there is little consideration of a middle ground or extended transition between modern consciousness and protoconsciousness (not unlike the transition from protolanguage to myriad languages of amazing sophistication). This is why Jaynes was so exciting when I first read him: he mapped, provisionally perhaps, how we got here from there.

McGilchrist believes that while the description above is accurate, Jaynes’ supporting details stem from a faulty premise, borne of an unfortunate mischaracterization of schizophrenia that was current in the 1970s in psychology and psychiatry. Never mind that schizophrenia is an affliction only a couple centuries old; the misunderstanding is that schizophrenics suffer from accentuated emotionalism and withdrawal into the body or the sensorium when in fact they are hyperrational and alienated from the body. The principal point of comparison between ancients and modern schizophrenics is that they both hear voices, but that fact arises from substantially different contexts and conditions. For Jaynes, hearing voices in antiquity came about because the unified brain/mind broke down into hemispheric competition where failure to cooperate resulted in a sort of split mind. According to McGilchrist, there was indeed a split mind at work, but not the one Jaynes believed. Rather, the split mind is the subject/object or self/other distinction, something readers of this blog may remember I have cited repeatedly as having initially developed in the ancient world. (Whether this is my own intuition or a synthesis of lots of reading and inquiry into historical consciousness is impossible for me to know anymore and unimportant anyway.) McGilchrist describes the subject/object distinction as the ability to objectify and to hold an object or idea at a “necessary distance” in the mind to better apprehend it, which was then generalized to the self. Here is how McGilchrist describes Jaynes’ error:

Putting it at its simplest, where Jaynes interprets the voices of the gods as being due to the disconcerting effects of the opening of a door between the hemispheres, so that the voices could for the first time be heard, I see them as being due to the closing of the door, so that the voices of intuition now appear distant, ‘other’; familiar but alien, wise but uncanny — in a word, divine.

What’s missing from McGilchrist’s reevaluation of Jaynes is how hearing voices in the ancient world may also account for the rise of polytheism and how the gradual disappearance of those same voices as modern consciousness solidified led to monotheism, an artifact of the transitional mind of antiquity that survived into modernity. I lack the anthropological wherewithal to survey ancient civilizations elsewhere in the Middle East (such as Egypt) or in Asia (such as China), but it seems significant to me that spiritual alternatives beyond the three Abrahamic religions are rooted in animism (e.g., sun, moon, other animals, Nature) or what could be called lifeways (e.g., Taoism and Buddhism) and lack father and mother figureheads. (Mother Nature doesn’t really compare to traditional personification of sky gods.) This omission is understandably outside the scope of The Master and His Emissary, but it would have been interesting to read that discussion had it been included. Another interesting omission is how habituation with these inner voices eventually became the ongoing self-narrative we all know: talking to ourselves inside our heads. Modern thinkers readily recognize the self talking to itself, which is the recursive nature of self-awareness, and loss of proper orientation and self-possession are considered aberrant — crazy unless one claims to hear the voice of god (which strangely no one believes even if they believe in god). In short, god (or the gods) once spoke directly to us, but no longer.

For me, these observations are among the pillars of modern consciousness, an ever-moving puzzle picture I’ve been trying to piece together for years. I don’t mean to suggest that there are three large bands of historical consciousness, but it should be clear that we were once in our evolutionary history nonconscious (not unconscious — that’s something else) but developed minds/selves over the eons. As with biology and language, there is no point of arrival where one could say we are now fully developed. We continue to change constantly, far more quickly with language and consciousness than with biology, but there are nonetheless several observable developmental thresholds. The subject/object distinction from antiquity is one that profoundly informs modern consciousness today. Indeed, the scientific method is based on objectification. This intellectual pose is so powerful and commonplace (but not ubiquitous) that immersion, union, and loss of self are scarcely conceivable outside of a few special circumstances that render us mostly nonthinking, such as being in the zone, flow, sexual congress, religious ecstasy, etc., where the self is obliterated and we become “mindless.”

We have embraced the machinery of our undoing as recreation.
—Joe Bageant

The subject of industrial and civilizational collapse has wandered in and out of the front of my brain for years now; it’s never far from my attention. I’ve blogged about it numerous times, but I almost always retreat immediately afterward from the anticipated horror. As knowledge of our shared dilemma (now unavoidable fate) diffuses through the population, I am vaguely heartened to see that some few others really do get it and are sounding the alarm and sharing their ideas, analyses, and preparations for the future. So it’s no surprise when I become newly aware of someone who has essentially spent a lifetime piecing together the puzzle and in the process become fairly well known as an activist, educator, writer, or simply a whistleblower on the subject of collapse. Michael Ruppert, the subject of the 2009 documentary film Collapse, is my most recent such discovery.


Recounting the arguments Ruppert makes in the film in favor of understanding what is happening in and to the world at this particular juncture in history is frankly unnecessary. As Ruppert says in the film, there is no longer any point to those debates. Either you are aware and convinced or you are unaware and/or in denial. Further debate solves nothing. What interests me more is the way the issue is framed by the filmmakers and how film critics have responded in print.

When Al Gore appeared in An Inconvenient Truth, it was revealed that he had been shaping his presentation on anthropogenic global warming over a period of decades and that the film was the fruition of years of effort that had matured and ripened in terms of the message, the underlying science, and the readiness of the public to listen (but not yet to act). So a good portion of the film was about Al Gore getting out the message, which is typical of politicians whose eyes always stray to the campaign angle of whatever cause they are pushing. The irony was that Al Gore’s political career was already over, yet he couldn’t resist being a central part of the story. “Look mom, no hands!”

In Collapse, Michael Ruppert is revealed to have been on a similar lifelong quest to discover the truth and get out the message. Accordingly, he occupies the center of the story not just as the lone talking head in a stark yet dramatically lit room (reminiscent of a criminal interrogation room) but as the impassioned, charismatic voice of Cassandra doomed to be ignored for his dark, unsavory prophecy — except that he’s not quite being ignored. (He points several times to power players in government and industry acting on his conclusions but refusing to be honest or validate his message.) The collapse of global civilization may not be the biggest story in the history of mankind, but if not, it’s certainly the most immediately relevant. Appallingly, the film is framed predominantly as a human story: the story of Michael Ruppert. Maybe this can be excused as synecdoche — the story of Ruppert is the story of all of us — but I suspect instead that the filmmakers find Ruppert’s story more engaging and entertaining as a documentary subject than the ongoing collapse, which is dragged onto stage mostly by Ruppert.

This myopic view also accounts in part for the inability of film critics to engage with and evaluate the content of the film beyond Ruppert himself as a quirky subject. I haven’t read all of the reviews, but those I have read follow the template of repeating some of Ruppert’s assertions for context to support adjectival blurbs such as mesmerizing, compelling, terrifying, ominous, riveting, chilling, horrifying, etc., which are probably also applicable to Ruppert himself. Yet the reviewers can’t seem to overcome the myth of journalistic objectivity to say, “well duh, this stuff is so convincing and obvious it absolutely demands we add our voices to his and warn of what’s to come.” Instead, one reviewer after the next hedges behind words like possible, plausible, seemingly persuasive, etc. “Golly, I’m just a dumb film critic, hardly even a real journalist. I can’t understand anything if not filtered through a celluloid lens.” The filmmakers, too, reveal some of this same bullshit report-the-controversy attitude when an off-screen voice asks Ruppert something to the effect of “Who are you to be stating such dark conclusions? What makes you qualified to offer an opinion?” Earlier in the film, Ruppert described himself as unusually adept at critical thinking. That’s the answer he might have given. But in the heat of confrontation, his actual answer was disappointing. I wish he had replied, “What qualifications do you think I need to describe reality accurately?”

This, of course, is the crux of the documentary. Not being a movie critic, I didn’t know that the filmmakers tend to take as their subjects people who are regarded as kooks, freaks, weirdos, eccentrics, and obsessives. So from its inception, the film is another in a series of profiles of strange folk who need not be taken too seriously. This perspective is reinforced by imposing the question of credentials on the subject, and in turn, the movie critics all fall in line and agree that they, too, are unqualified to evaluate Ruppert’s statements but can only review the film for its entertainment value. Never mind that the arguments and underlying science take only modest intellectual wherewithal to recognize as truth. No, this truth, since we lack the courage to confront it with any integrity, is now being served up as spectacle: “lookie at the funny tin-foil-hat-wearing conspiracy theorist going on about doom. How wonderfully charming he is for our fun and enjoyment!” As it happens, Ruppert escapes being humiliated or shamed despite being lured into self-exposure and made the subtle object of derision. Withholding judgment as do the cowardly critics does not change the fact that Ruppert was set up for failure.

This is one reason among many why we have earned our fate.