Posts Tagged ‘science’

From the not-really-surprising-news category comes a New Scientist report earlier this month that the entire world was irradiated by follow-on effects of the Fukushima disaster. Perhaps it’s exactly as the article states: the equivalent of one X-ray. I can’t know with certainty, nor can bupkis be done about it by the typical Earth inhabitant (or the atypical inhabitant, I might add). Also earlier this month, a tunnel collapse at the Dept. of Energy’s Hanford nuclear waste storage site in Washington State gave everyone a start regarding a possible nearby release of radiation. As with Fukushima, I judge there is little trust to be placed in news or disclosure, and fuck all anyone can do about any of it.

I’m far too convinced of collapse by now to worry too much about these Tinkerbells, knowing full well that what’s to come will be worse by many orders of magnitude when the firecrackers start popping due to inaction and inevitability. Could be years or decades away still; but as with other aspects of collapse, who knows precisely when? Risky energy plant operations and nuclear waste disposal issues promise to be with us for a very long time indeed. Makes it astonishing to think that we plunged full-steam ahead without realistic (i.e., politically acceptable) plans to contain the problems before creating them. Further, nuclear power is still not economically viable without substantial government subsidy. Abandonment of this technological boondoggle thus seems pretty unlikely, not least because of the enormous expense of decommissioning all the sites currently operating.

These newsbits and events also reminded me of the despair I felt in 1986 on the heels of the Chernobyl disaster. Maybe in hindsight it’s not such a horrible thing to cede entire districts to nature for a period of several hundred years as what some have called exclusion or sacrifice zones. Absent human presence, such regions demonstrate remarkable resilience and profundity in a relatively short time. Still, it boggles the mind, doesn’t it, to think of two exclusion zones now, Chernobyl and Fukushima, where no one should go until, at the very least, several radioactive half-lives have passed? Interestingly, that light at the end of the tunnel, so to speak, seems to be telescoping even farther away from the date of the disaster, a somewhat predictable shifting of the goalposts. I’d conjecture that’s because contamination has not yet ceased and is actually ongoing, but again, what do I know?

On a lighter note, all this also put me in mind of the hardiness of various foodstuffs. God knows we consume loads of crap that can hardly be called food anymore, from shelf-stable fruit juices and bakery items (e.g., Twinkies) that never go bad to not-cheese used by Taco Bell and nearly every burger joint in existence to McDonald’s burgers and fries that refuse to spoil even when left out for months to test that very thing. It gives me considerable pause to consider that foodstuff half-lives have been radically and unnaturally extended by creating abominable Frankenfoods that beggar the imagination. For example, strawberries and tomatoes used to be known to spoil rather quickly and thus couldn’t withstand long supply lines from farm to table; nor were they available year round. Rather sensibly, people grew their own when they could. Today’s fruits and veggies still spoil, but interventions undertaken to extend their stability have frequently come at the expense of taste and nutrition. Organic and heirloom markets have sprung up to fill those niches, which suggests the true cost of growing and distributing everyday foods that will not survive a nuclear holocaust.

I picked up a copy of Daniel Siegel’s book Mind: A Journey to the Heart of Being Human (2017) to read and supplement my ongoing preoccupation with human consciousness. Siegel’s writing is the source of considerable frustration. Now about 90 pp. into the book (I am considering putting it aside), I find that he commits frequent grammatical errors (where are book editors these days?), doesn’t really know how to use a comma properly, and doesn’t write in recognizable paragraph form. He has a bad habit of posing questions to suggest the answers he wants to give and drops constant hints of something soon to be explored like news broadcasts that tease the next segment. He also deploys a tired, worn metaphor that readers are on a journey of discovery with him, embarked on a path, exploring a subject, etc. Yecch. (A couple of Amazon reviews also note that grayish type on parchment (cream) paper poses a legibility problem due to poor contrast even in good light — undoubtedly not really Siegel’s fault.)

Siegel’s writing is also irritatingly circular, casting and recasting the same sentences in repetitious series of assertions that have me wondering frequently, “Haven’t I already read this?” Here are a couple examples:

When energy flows inside your body, can you sense its movement, how it changes moment by moment?

then only three sentences later

Energy, and energy-as-information, can be felt in your mental experience as it emerges moment by moment. [p. 52]

Another example:

Seeing these many facets of mind as emergent properties of energy and information flow helps link the inner and inter aspect of mind seamlessly.

then later in the same paragraph

In other words, mind seen this way could be in what seems like two places at once as inner and inter are part of one interconnected, undivided system. [p. 53]

This is definitely a bug, not a feature. I suspect the book could easily be condensed from 330 pp. to fewer than 200 pp. if the writing weren’t so self-indulgent. Indeed, while I recognize a healthy dose of repetition is an integral part of narrative form (especially in music), Siegel’s relentless repetition feels like propaganda 101, where guileless insistence (on lies or merely the preferred story one seeks to plant in the public sphere) wears down the reader rather than convinces him or her. This is also marketing 101 (e.g., Coca-Cola, McDonald’s, Budweiser, etc. continuing to advertise what are by now exceedingly well-established brands).


This past Thursday was an occasion of protest for many immigrant laborers who did not show up to work. Presumably, this action was in response to recent executive attacks on immigrants and hoped to demonstrate how businesses would suffer without immigrant labor doing jobs Americans frequently do not want. Tensions between the ownership and laboring classes have a long, tawdry history I cannot begin to summarize. As with other contextual failures, I daresay the general public believes incorrectly that such conflicts date from the 19th century, when formal sociopolitical theories like Marxism, which intersect heavily with labor economics, were first published. An only slightly better understanding is that the labor movement commenced in the United Kingdom some fifty years after the Industrial Revolution began, such as with the Luddites. I pause to remind readers that the most basic, enduring, and abhorrent labor relationship, extending back millennia, is slavery, which ended in the U.S. only 152 years ago but continues even today in slightly revised forms around the globe.

Thursday’s work stoppage was a faint echo of general strikes and unionism from the middle of the 20th century. Gains in wages and benefits, working conditions, and negotiating position transferred some power from owners to laborers during that period, but today, laborers must sense they are back on their heels, defending conditions fought for by their grandparents but ultimately losing considerable ground. Of course, I’m sympathetic to labor, considering I’m not in the ownership class. (It’s all about perspective.) I must also admit, however, to once quitting, after only one day, a job that was simply too, well, laborious. I had that option at the time, though it ultimately led nearly to bankruptcy for me — a life lesson that continues to inform my attitudes. As I survey the scene today, however, I suspect many laborers — immigrants and native-born Americans alike — have the unenviable choice of accepting difficult, strenuous labor for low pay or being unemployed. Gradual reduction of demand for labor has two main causes: globalization and automation.


I discovered “The Joe Rogan Experience” on YouTube recently and have been sampling from among the nearly 900 pod- or webcasts posted there. I’m hooked. Rogan is an impressive fellow. He clearly enjoys the life of the mind but, unlike many who are absorbed solely in ideas, has not ignored the life of the body. Over time, he’s also developed expertise in multiple endeavors and can participate knowledgeably in discussion on many topics. Webcasts are basically long, free-form, one-on-one conversations. This lack of structure gives the webcast ample time to explore topics in depth or simply meander. Guests are accomplished or distinguished in some way and usually have fame and wealth to match, which often affects content (i.e., Fitzgerald’s observation: “The rich are different from you and me”). One notable bar to entry is having a strong media presence.

Among the recurring themes, Rogan trots out his techno optimism, which is only a step short of techno utopianism. His optimism is based on two interrelated developments in recent history: widespread diffusion of information over networks and rapid advances in medical devices that can be expected to accelerate, to enhance human capabilities, and soon to transform us into supermen, bypassing evolutionary biology. He extols these views somewhat regularly to his guests, but alas, none of the guests I’ve watched seem to be able to fathom the ideas satisfactorily enough to take up the discussion. (The same is true of Rogan’s assertion that money is just information, which is reductive and inaccurate.) They comment or joke briefly and move on to something more comfortable or accessible. Although I don’t share Rogan’s optimism, I would totally engage in discussion of his flirtation with Transhumanism (a term he doesn’t use). That’s why I’m blogging here about Rogan, in addition to my lacking enough conventional distinction and fame to score an invite to be a guest on his webcast. Plus, he openly disdains bloggers, many of whom moderate comments (I don’t) or otherwise channel discussion to control content. Oh, well.


Caveat: Apologies for this overlong post, which random visitors (nearly the only kind I have besides the spambots) may find rather challenging.

The puzzle of consciousness, mind, identity, self, psyche, soul, etc. is an extraordinarily fascinating subject. We use various terms, but they all revolve around a unitary property and yet come from different approaches, methodologies, and philosophies. The term mind is probably the most generic; I tend to use consciousness interchangeably and more often. Scientific American has an entire section of its website devoted to the mind, with subsections on Behavior & Society, Cognition, Mental Health, Neurological Health, and Neuroscience. (Top-level navigation offers links to these sections: The Sciences, Mind, Health, Tech, Sustainability, Education, Video, Podcasts, Blogs, and Store.) I doubt I will explore very deeply because science favors the materialist approach, which I believe misses the forest for the trees. However, the presence of this area of inquiry right at the top of the page indicates how much attention and research the mind/consciousness is currently receiving.

A guest blog at Scientific American by Adam Bear entitled “What Neuroscience Says about Free Will” makes the fashionable argument (these days) that free will doesn’t exist. The blog/article is disclaimed: “The views expressed are those of the author(s) and are not necessarily those of Scientific American.” I find that a little weaselly. Because the subject is still wide open to interpretation and debate, Scientific American should simply offer conflicting points of view without worry. Bear’s arguments rest on the mind’s ability to revise and redate experience occurring within the frame of a few milliseconds to allow for processing time, also known as the postdictive illusion (the opposite of predictive). I wrote about this topic more than four years ago here. Yet another discussion is found here. I admit to being irritated that the questions and conclusions stem from a series of assumptions, primarily that whatever free will is must occur solely in consciousness (whatever that is) as opposed to originating in the subconscious and subsequently transferring into consciousness. Admittedly, we use these two categories — consciousness and the subconscious — to account for the rather limited amount of processing that makes it all the way into awareness vs. the significant amount that remains hidden or submerged. A secondary assumption, the broader project of neuroscience in fact, is that, like free will, consciousness is housed somewhere in the brain or its categorical functions. Thus, fruitful inquiry results from seeking its root, seed, or seat as though the narrative constructed by the mind, the stream of consciousness, were on display to an inner observer or imp in what Daniel Dennett years ago called the Cartesian Theater. That time-worn conceit is the so-called ghost in the machine. (more…)

The last time I blogged about this topic, I took an historical approach, locating the problem (roughly) in time and place. In response to recent blog entries by Dave Pollard at How to Save the World, I’ve delved into the topic again. My comments at his site are the length of most of my own blog entries (3–4 paras.), whereas Dave tends to write in chapter form. I’ve condensed to my self-imposed limit.

Like culture and history, consciousness is a moving train that yields its secrets long after it has passed. Thus, assessing our current position is largely conjectural. Still, I’ll be reckless enough to offer my intuitions for consideration. Dave has been pursuing radical nonduality, a mode of thought characterized by losing one’s sense of self and becoming selfless, which diverges markedly from ego consciousness. That mental posture, described elsewhere by nameless others as participating consciousness, is believed to be what preceded the modern mind. I commented that losing oneself in intense, consuming flow behaviors is commonplace but temporary, a familiar, even transcendent place we can only visit. Its appeals are extremely seductive, however, and many people want to be there full-time, as we once were. The problem is that ego consciousness is remarkably resilient and self-reinforcing. Despite losing oneself from time to time, we can’t be liberated from the self permanently, and pathways to even temporarily getting out of one’s own head are elusive and sometimes self-destructive.

My intuition is that we are fumbling toward just such a quieting of the mind, a new dark age if you will, or what I called self-lite in my discussion with Dave. As we stagger forth, groping blindly in the dark, the transitional phase is characterized by numerous disturbances to the psyche — a crisis of consciousness wholly different from the historical one described previously. The example uppermost in my thinking is people lost down the rabbit hole of their handheld devices and desensitized to the world beyond the screen. Another is the ruined, wasted minds of (arguably) two or more generations of students done great disservice by their parents and educational institutions at all levels, a critical mass of intellectually stunted and distracted young adults by now. Yet another is those radicalized by their close identification with one or more special interest groups, also known as identity politics. A further example is the growing prevalence of confusion surrounding sexual orientation and gender identity. In each example, the individual’s ego is confused, partially suppressed, and/or under attack. Science fiction and horror genres have plenty of instructive examples of people who are no longer fully themselves, their bodies zombified or made into hosts for another entity that takes up residence, commandeering or shunting aside the authentic, original self.

Despite having identified modern ego consciousness as a crisis and feeling no small amount of empathy for those seeking radical nonduality, I find myself in the odd position of defending the modern mind precisely because transitional forms, if I have understood them properly, are so abhorrent. Put another way, while I can see the potential value and allure of extinguishing the self even semi-permanently, I will not be an early adopter. Indeed, if the modern mind took millennia to develop as one of the primary evolutionary characteristics of Homo sapiens sapiens, it seems foolish to presume that it can be uploaded into a computer, purposely discarded by an act of will, or devolved in even a few generations. Meanwhile, though the doomer in me recognizes that ego consciousness is partly responsible for bringing us to the brink of (self-)annihilation (financial, geopolitical, ecological), individuality and intelligence are still highly prized where they can be found.

Something I wrote ten years ago at Creative Destruction. Probably worth an update.

Creative Destruction

We’ve all seen the reports. U.S. high schoolers rank at or near the bottom in math and science. Admittedly, that link is to a story eight years old, but I doubt rankings have changed significantly. A new study and report are due out next year. See this link.

What interests me is that we live in an era of unprecedented technological advancement. While the U.S. may still be in the vanguard, I wonder how long that can last when the source of inspiration and creativity — human knowledge and understanding — is dying at the roots in American schools. It’s a sad joke, really, that follow-the-directions instructions for setting the clock on a VCR (remember those?) proved so formidable for most end users that a time-setting function is built into more recent recording systems such as TIVO. Technical workarounds may actually enable ever-increasing levels of disability working with our…


I chanced upon a dinner conversation of the type that tends to light me up, full of familiar assertions and brief citations that purport to make sense of the world but instead open up broad inquiries without ever resolving anything. Whereas all the hanging threads might be frustrating to others, I don’t mind that we leapt from subject to subject carelessly. Engagement with another’s intellect is what really fires me.

So in the course of the discussion, I argued (as devil’s advocate) that the discontinuity between various scales and timeframes renders subtle appreciation of the world and/or universe moot. Specifically, fine-grained knowledge that flows from hard sciences such as mathematics, biology, chemistry, and physics does not combine to form anything approaching a complete picture of reality in the mind of the average person. Soft sciences such as sociology, psychology, economics, anthropology, and history are as likely to confound and confuse as illuminate, considering their vulnerability to interpretative flexibility. Further, the extensive conjectural and theoretical complexity of cosmology and the quantum sciences is so far out of scope for typical beer-swilling Joes as to be invisible. Even the basic distinction between the Copernican and Ptolemaic models of the solar system is losing currency, with no immediately apparent effect in the wider (American) culture of prideful ignorance.

Here’s the rub: I believe more nearly the opposite, namely, that refined understandings of the universe, developed and held in the minds of a relative few and never achieving the completeness of a unified theory yet sufficient, in our hubris, to furnish a model for action in the world, are eventually (or ultimately) embedded in the deep culture. Even so, I found it difficult to argue that point to us fish inside the fishbowl. Indeed, the fellow across the table from me, who possessed far greater scientific wherewithal than do I, could only rebut my assertions with the baldest “is not, is too” type of negation.

I attempted an exploration of a deep-culture effect more than two years ago with this post, but I fear the whole subject was too arcane to make much of an impression. General readers simply do not go in for such analysis. Yet I still believe that the effect is present in, for example, our willingness to trash the world until it’s uninhabitable — at least for us — and our earnest desire to transcend our embodiment and be something other than human (e.g., Transhumanism), which is an expression of our deep self-loathing and self-destructive impulse (explored amply by The Compulsive Explainer — see blogroll). Like my dinner table conversation, these inquiries lead nowhere in particular but are nonetheless fascinating. Perhaps a generous reader out there can point to a better example that is more accessible and convincing than what I’ve been able to muster.

Every blog post I write suffers from the same basic problem: drawing disparate ideas together in succinct, cogent form that expresses enough of the thesis to make sense while leaving room for commentary, discussion, and development. Alas, commentary and discussion are nearly nonexistent, but that’s always been my expectation and experience given my subjects. When expanding a blog into several parts, the greatest risk is that ideas fail to coalesce legibly, compounded by the unlikelihood that readers who happen to navigate here will bother to read all the parts. (I suspect this is due in part to most readers’ inability to comprehend complex, multipart writing, as discussed in this blog post by Ugo Bardi describing surprising levels of functional illiteracy.) So this addendum to my three-part blog on Dissolving Reality is doomed, like the rest of my blog, to go unread and ignored. Plus ça change

Have you had the experience of buying a new model of vehicle and suddenly noticed other vehicles of the same model on the road? That’s what I’ve been noticing since I hatched my thesis (noting with habitual resignation that there is nothing new under the sun), which is that the debased information environment now admits multiple interpretations of reality, none of which can lay exclusive claim to authority as an accurate account. Reality has instead dissolved into a stew of competing arguments, often extremely politicized, which typically appeal to emotion. Historically, the principal conflict was between different ways of knowing exemplified by faith and reason, perhaps better understood as the church (in the West, the Catholic Church) vs. science. Floodgates have now opened to any wild interpretation one might concoct, all of which coexist on roughly equal footing in the marketplace of ideas. (more…)

At last, getting to my much, much delayed final book blogs (three parts) on Iain McGilchrist’s The Master and His Emissary. The book came out in 2010, I picked it up in 2012 (as memory serves), and it took me nearly two years to read its entirety, during which time I blogged my observations. I knew at the time of my previous post on the book that there would be more to say, and it’s taken considerable time to get back to it.

McGilchrist ends with a withering criticism of the Modern and Postmodern (PoMo) Eras, which I characterized as an account of how the world went mad. That still seems accurate to me: the madness that overtook us in the Modern Era led to world wars, genocides, and systematic reduction of humanity to mere material and mechanism, what Ortega y Gasset called Mass Man. Reduction of the rest of the living world to resources to be harvested and exploited by us is a worldview often called instrumental reality. From my armchair, I sense that our societal madness has shape-shifted a few times since the fin de siècle 1880s and 90s. Let’s start with quotes from McGilchrist before I extend into my own analysis. Here is one of his many descriptions of the left-hemisphere paradigm under which we now operate:

In his book on the subject, Modernity and Self-identity, Anthony Giddens describes the characteristic disruption of space and time required by globalisation, itself the necessary consequence of industrial capitalism, which destroys the sense of belonging, and ultimately of individual identity. He refers to what he calls ‘disembedding mechanisms’, the effect of which is to separate things from their context, and ourselves from the uniqueness of place, what he calls ‘locale’. Real things and experiences are replaced by symbolic tokens; ‘expert’ systems replace local know-how and skill with a centralised process dependent on rules. He sees a dangerous form of positive feedback, whereby theoretical positions, once promulgated, dictate the reality that comes about, since they are then fed back to us through the media, which form, as much as reflect, reality. The media also promote fragmentation by a random juxtaposition of items of information, as well as permitting the ‘intrusion of distant events into everyday consciousness’, another aspect of decontextualisation in modern life adding to loss of meaning in the experienced world. [p. 390]

Reliance on abstract, decontextualized tokens having only figurative, nonintrinsic power and meaning is a specific sort of distancing, isolation, and reduction that describes much of modern life and shares many characteristics with schizophrenia, as McGilchrist points out throughout the chapter. That was the first shape-shift of our madness: full-blown mechanization borne out of reductionism and materialism, perspectives bequeathed to us by science. The slow process had been underway since the invention of the mechanical clock and discovery of heliocentrism, but it gained steam (pun intended) as the Industrial Revolution matured in the late 19th century.

The PoMo Era is recognized as having begun just after the middle of the 20th century, though its attributes are questionably defined or understood. That said, the most damning criticism leveled at PoMo is its hall-of-mirrors effect that renders objects in the mirrors meaningless because the original reference point is obscured or lost. McGilchrist also refers repeatedly to loss of meaning resulting from the ironizing effect of left-brain dominance. The corresponding academic fad was PoMo literary criticism (deconstruction) in the 1970s, but it had antecedents in quantum theory. Here is McGilchrist on PoMo:

With post-modernism, meaning drains away. Art becomes a game in which the emptiness of a wholly insubstantial world, in which there is nothing beyond the set of terms we have in vain used to ‘construct’ meaning, is allowed to speak for its own vacuity. The set of terms are now seen simply to refer to themselves. They have lost transparency; and all conditions that would yield meaning have been ironized out of existence. [pp. 422–423]

This was the second shape-shift: loss of meaning in the middle of the 20th century as purely theoretical formulations, which is to say, abstraction, gained adherents. He goes on:

Over-awareness … alienates us from the world and leads to a belief that only we, or our thought processes, are real … The detached, unmoving, unmoved observer feels that the world loses reality, becomes merely ‘things seen’. Attention is focussed on the field of consciousness itself, not on the world beyond, and we seem to experience experience … [In hyperconsciousness, elements] of the self and of experience which normally remain, and need to remain, intuitive, unconscious, become the objects of a detached, alienating attention, the levels of consciousness multiply, so that there is an awareness of one’s own awareness, and so on. The result of this is a sort of paralysis, in which even everyday ‘automatic’ actions such as moving one leg in front of another in order to walk can become problematic … The effect of hyperconsciousness is to produce a flight from the body and from its attendant emotions. [pp. 394–396]

Having devoted a fair amount of my intellectual life to trying to understand consciousness, I immediately recognized the discussion of hyperconsciousness (derived from Louis Sass) as what I often call recursion error, where consciousness becomes the object of its own contemplation, with obvious consequences. Modern, first-world people all suffer from this effect to varying degrees because that is how modern consciousness is shaped (warped, really).

I believe we can observe now two more characteristic extensions or variations of our madness, probably overlapping, not discrete, following closely on each other: the Ironic and Post-Ironic. The characteristics are these:

  • Modern — reductive, mechanistic, instrumental interpretation of reality
  • Postmodern — self-referential (recursive) and meaningless reality
  • Ironic — reversed reality
  • Post-Ironic — multiplicity of competing meanings/narratives, multiple realities

All this is quite enough to chew on for a start. I plan to continue in pts. 2 and 3 with descriptions of the Ironic and Post-Ironic.