
Although I’m not paying much attention to breathless reports about imminent strong AI, the Singularity, and computers already able to “model” human cognition and perform “impressive” feats of creativity (e.g., responding to prompts and creating “artworks” — scare quotes intended), recent news reports that chatbots are harassing, gaslighting, and threatening users just make me laugh. I’ve never wandered over to that space, don’t know how to connect, and don’t plan to test-drive one for verification. Isn’t it obvious to users that they’re interacting with a computer? Chatbots are natural-language simulators within computers, right? Why take them seriously (other than perhaps their potential effects on children and those of diminished capacity)? I also find it unsurprising that, if a chatbot is designed to resemble error-prone human cognition/behavior, it would quickly become an asshole, go insane, or both. (Designers accidentally got that aspect right. D’oh!) That trajectory is a perfect embodiment of the race to the bottom of the brain stem (try searching that phrase) that keeps sane observers like me from indulging in caustic online interactions. Hell no, I won’t go.

The conventional demonstration that strong AI has arisen (e.g., Skynet from the Terminator movie franchise) is the Turing test, which is essentially the inability of humans to distinguish between human and computer interactions (not a machine-led extermination campaign) within limited interfaces such as text-based chat (e.g., the dreaded digital assistants that sometimes pop up on websites). Alan Turing came up with the test at the outset of the computing era, so the field was arguably not yet mature enough to conceptualize a better test. I’ve always thought the test actually demonstrates the fallibility of human discernment, not the arrival of some fabled ghost in the machine. At present, chatbots may be fooling no one into believing that actual machine intelligence is present on the other side of the conversation, but it’s a fair expectation that further iterations (i.e., ChatBot 1.0, 2.0, 3.0, etc.) will improve. Readers can decide whether that improvement will be progress toward strong AI or merely better ability to fool human interlocutors.

Chatbots gone wild offer philosophical fodder for further inquiry into ebbing humanity as the drive toward trans- and post-human technology continues refining and redefining the dystopian future. What about chatbots makes interacting with them hypnotic rather than frivolous — something wise thinkers immediately discard or even avoid? Why are some humans drawn to virtual experience rather than, say, staying rooted in human and animal interactions, our ancestral orientation? The marketplace has already (for now) resoundingly rejected Google Glass and Facebook’s Meta. I haven’t hit upon satisfactory answers to those questions, but my suspicion is that immersion in some vicarious fictions (e.g., novels, TV, and movies) fits well into narrative-styled cognition while other media trigger revulsion as one descends into the so-called Uncanny Valley — an unfamiliar term when I first blogged about it, though it has been trending of late.

If readers want a really deep dive into this philosophical area — the dark implications of strong AI and an abiding human desire to embrace and enter false virtual reality — I recommend a lengthy 7-part Web series called “Mere Simulacrity” hosted by Sovereign Nations. The episodes I’ve seen feature James Lindsay and explore secret hermetic religions operating for millennia already alongside recognized religions. The secret cults share with tech companies two principal objectives: (1) simulation and/or falsification of reality and (2) desire to transform and/or reveal humans as gods (i.e., ability to create life). It’s pretty terrifying stuff, rather heady, and I can’t provide a reasonable summary. However, one takeaway is that by messing with human nature and risking uncontrollable downstream effects, technologists are summoning the devil.

The difference between right and wrong is obvious to almost everyone by the end of kindergarten. Temptations persist, and everyone does things great and small known to be wrong when enticements and advantages outweigh punishments. C’mon, you know you do it. I do, too. Only at the conclusion of a law degree or the start of a political career (funny how those two often coincide) do things get particularly fuzzy. One might add military service to those exceptions except that servicemen are trained not to think but simply to do (i.e., follow orders without question). Anyone with functioning ethics and morality also recognizes that in legitimate cases of things getting unavoidably fuzzy in a hypercomplex world, the dividing line often can’t be established clearly. Thus, venturing into the wide, gray, middle area is really a signal that one has probably already gone too far. And yet, demonstrating that human society has not really progressed ethically despite considerable gains in technical prowess, egregiously wrong things are getting done anyway.

The whopper of which nearly everyone is guilty (thus, guilty pleasure) is … the Whopper. C’mon, you know you eat it. I know I do. Of course, the irresistible and ubiquitous fast food burger is really only one example of a wide array of foodstuffs known to be unhealthy, cause obesity, and pose long-term health problems. Doesn’t help that, just like Big Tobacco, the food industry knowingly refines its products (processed foods, anyway) to be hyperstimuli impossible to ignore or resist unless one is iron-willed or develops an eating disorder. Another hyperstimulus most can’t escape is the smartphone (or a host of other electronic gadgets). C’mon, you know you crave the digital pacifier. I don’t, having managed to avoid that particular trap. For me, electronics are always only tools. However, railing against them with respect to how they distort cognition (as I have) convinces exactly no one, so that argument goes on the deferral pile.

Another giant example, not in terms of participation but in terms of effect, is the capitalist urge to gather to oneself as much filthy lucre as possible only to sit heartlessly on top of that nasty dragon’s hoard while others suffer in plain sight all around. C’mon, you know you would do it if you could. I know I would — at least up to a point. Periods of gross inequality come and go over the course of history. I won’t make direct comparisons between today and any one of several prior Gilded Ages in the U.S., but it’s no secret that the existence today of several hundy billionaires and an increasing number of mere multibillionaires represents a gross misallocation of financial resources: funneling the productivity of the masses (and fiat dollars whiffed into existence with keystrokes) into the hands of a few. Fake philanthropy to launder reputations fails to convince me that such folks are anything other than miserly Scrooges fixated on maintaining and growing their absurd wealth, influence, and bogus social status at the cost of their very souls. Seriously, who besides sycophants and climbers would even want to be in the same room as one of those people (names withheld)? Maybe better not to answer that question.


I started reading Yuval Harari’s book Homo Deus: A Brief History of Tomorrow (2017). I had expected to read Sapiens (2014) first, but its follow-up came into my possession instead. My familiarity with Harari’s theses and arguments stems from his gadfly presence on YouTube, being interviewed or giving speeches promoting his books. He’s a compelling yet confounding thinker, and his distinctive voice in my mind’s ear lent my reading the quality of an audiobook. I’ve only read the introductory chapter (“A New Human Agenda”) so far, the main argument being this:

We have managed to bring famine, plague and war under control thanks largely to our phenomenal economic growth, which provides us with abundant food, medicine, energy and raw materials. Yet this same growth destabilises the ecological equilibrium of the planet in myriad ways, which we have only begun to explore … Despite all the talk of pollution, global warming and climate change, most countries have yet to make any serious economic or political sacrifices to improve the situation … In the twenty-first century, we shall have to do better if we are to avoid catastrophe. [p. 20]

“Do better”? Harari’s bland understatement of the catastrophic implications of our historical moment is risible. Yet as a consequence of having (at least temporarily) brought three major historical pestilences (no direct mention of the fabled Four Horsemen of the Apocalypse) under administrative, managerial, and technical control (I leave that contention unchallenged), Harari states rather over-confidently — forcefully even — that humankind is now turning its attention and ambitions toward different problems, namely, mortality (the fourth of the Four Horsemen and one of the defining features of the human condition), misery, and divinity.

Harari provides statistical support for his thesis (mere measurement offered as indisputable evidence — shades of Steven Pinker in Enlightenment Now), none of which I’m in a position to refute. However, his contextualization, interpretation, and extrapolation of trends purportedly demonstrating how humans will further bend the arc of history strike me as absurd. Harari also misses the two true catalyzing factors underlying growth and trends that have caused history to go vertical: (1) a fossil-fuel energy binge of roughly two and one-half centuries that peaked more than a decade ago and (2) improved information and material flows and processing that enabled managerial and bureaucratic functions to transcend time and space or at least lessen their constraints on human activity dramatically. James Beniger addresses information flow and processing in his book The Control Revolution (1989). Many, many others have provided in-depth analyses of energy uses (or inputs) because, contrary to the familiar song lyric, it’s energy that makes the world go round. No one besides Harari (to my knowledge, though I’m confident some lamebrained economist agrees with him) leaps to the unwarranted conclusion that economic growth is the principal forcing factor of the last 2–3 centuries.

I’ve taken issue with Harari before (here and here) and will not repeat those arguments. My impression of Homo Deus, now that I’ve got 70 pages under my belt, is that Harari wants to have it both ways: vaguely optimistic (even inspirational and/or aspirational) regarding future technological developments (after all, who doesn’t want the marvels and wonders we’ve been ceaselessly teased and promised?) yet precautionary because those very developments will produce disruptive and unforeseeable side effects (black swans) we can’t possibly yet imagine. To his credit, Harari’s caveats regarding unintended consequences are plain and direct. For instance, one of the main warnings is that the way we treat nonhuman species is the best model for how we humans will in turn be treated when superhumans or strong AI appear, which Harari believes is inevitable so long as we keep tinkering. Harari also indicates that he’s not advocating for any of these anticipated developments but is merely mapping them as likely outcomes of human restlessness and continued technological progress.

Harari’s disclaimers do not convince me; his writing is decidedly Transhumanist in character. In the limited portion I’ve read, Harari comes across far more like “golly, gee willikers” at human cleverness and potential than as someone seeking to slam on the brakes before we innovate ourselves out of relevance or existence. In fact, by focusing on mortality, misery, and divinity as future projects, Harari gets to indulge in making highly controversial (and fatuous) predictions regarding one set of transformations that can happen only if the far more dire and immediate threats of runaway global warming and nonlinear climate change don’t first lead to the collapse of industrial civilization and near-term extinction of humans alongside most other species. My expectation is that this second outcome is far more likely than anything contemplated by Harari in his book.

Update: Climate chaos has produced the wettest winter, spring, and summer on record, which shows no indication of abating. A significant percentage of cropland in flooded regions around the globe has gone unplanted, and fields that were planted are stunted and imperiled. Harari’s confidence that we had that famine problem licked is being sorely tested.

This is about to get weird.

I caught a good portion of a recent Joe Rogan podcast (sorry, no link or embedded video) with Alex Jones and Eddie Bravo (nearly 5 hours long instead of the usual 2 to 3) where the trio indulged themselves in speculation about a purported grand conspiracy to destroy civilization and establish a new post-human one. The more Jones rants, er, speaks (which is quite a lot), the more he sounds like a madman. But he insists he does so to serve the public. He sincerely wants people to know things he’s figured out about an evil cabal of New World Order types. So let me say at least this: “Alex Jones, I hear you.” But I’m unconvinced. Apologies to Alex Jones et al. if I got any details wrong. For instance, it’s not clear to me whether Jones believes this stuff himself or whether he’s merely reporting what others may believe.

The grand conspiracy supposedly involves interdimensional beings operating at a subliminal range below or beyond normal human perception. Perhaps they revealed themselves to a few individuals (to the cognoscenti, ya know, or is that shared revelation how one is inducted into the cognoscenti?). Rogan believes that ecstatic states induced by drugs provide access to revelation, like tuning a radio to the correct (but secret) frequency. Whatever exists in that altered cognitive state appears like a dream and is difficult to understand or remember. The overwhelming impression Rogan reports as lasting is of a distinct nonhuman presence.

Maybe I’m not quite as barking mad as Jones or as credulous as Rogan and Bravo, but I have to point out that humans are interdimensional beings. We move through three dimensions of space and one unidirectional dimension of time. If that doesn’t quite make sense, then I refer readers to Edwin Abbott Abbott’s well-known book Flatland. Abbott describes what it might be like for conscious beings in only two dimensions of space (or one). Similarly, for most of nature outside of vertebrates, it’s understood that consciousness, if it exists at all (e.g., not in plants), is so rudimentary that there is no durable sense of time. Beings exist in an eternal now (could be several seconds long/wide/tall — enough to function) without memory or anticipation. With that in mind, the possibility of multidimensional beings in 5+ dimensions completely imperceptible to us doesn’t bother me in the least. The same is true of the multiverse or many-worlds interpretation. What bothers me is that such beings would bother with us, especially with a conspiracy to crash civilization.

The other possibility at which I roll my eyes is a post-human future: specifically, a future when one’s consciousness escapes its biological boundaries. The common trope is that one’s mind is uploaded to a computer to exist in the ether. Another is that one transcends death somehow with intention and purpose instead of simply ceasing to be (as atheists believe) or some variation of the far more common religious heaven/hell/purgatory myth. This relates as well to the supposition of strong AI about to spark (the Singularity): self-awareness and intelligent thought that can exist on some substrate other than human biology (the nervous system, really, including the brain). Sure, cognition can be simulated for some specific tasks like playing chess or go, and we humans can be fooled easily into believing we are communicating with a thought machine à la the Turing Test. But the rather shocking sophistication, range, utility, and adaptability of even routine human consciousness is so far beyond any current simulation that the usual solution to get engineers from where they are now to real, true, strong AI is always “and then a miracle happened.” The easy, obvious route/accident is typically a power surge (e.g., a lightning strike).

Why bother with mere humans is a good question if one is post-human or an interdimensional being. It could well be that existence in such a realm would make watching human interactions either impenetrable (news flash, they are already) or akin to watching through a dim screen. A familiar trope here is the lost soul imprisoned in the spirit world, a parallel dimension that permits viewing from one side only but prohibits contact except perhaps through psychic mediums (if you believe in such folks — Rogan for one doesn’t).

The one idea worth repeating from the podcast is the warning not to discount all conspiracy theories out of hand as bunk. At least a few have been demonstrated to be true. Whether any of the sites behind that link are to be believed, I leave to readers to judge.

Addendum: Although a couple of comments came in, no one puzzled over the primary piece I had to add, namely, that we humans are interdimensional beings. The YouTube video below depicts a portion of the math/science behind my statement, showing how at least two topological surfaces behave paradoxically when limited to 2 or 3 dimensions but theoretically cohere in 4+ dimensions imperceptible to us.
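For those who prefer equations to video, here is one standard instance of the paradox (my own choice of example; I’m assuming the Klein bottle is among the surfaces the video treats). The Klein bottle cannot be embedded in ordinary three-dimensional space without passing through itself, yet the map below places it in four dimensions with no self-intersection at all (for constants a > r > 0):

```latex
% Klein bottle embedded in R^4 without self-intersection (assumes a > r > 0)
\[
(u, v) \;\mapsto\;
\bigl( (a + r\cos v)\cos u,\;
       (a + r\cos v)\sin u,\;
       r\sin v\,\cos\tfrac{u}{2},\;
       r\sin v\,\sin\tfrac{u}{2} \bigr),
\qquad u, v \in [0, 2\pi).
\]
```

Going once around in u flips the sign of the last two coordinates, supplying the half-twist that makes the surface non-orientable; it is that twist which cannot be accommodated in three dimensions without the surface passing through itself.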

In the sense that a picture is worth a thousand words, this cartoon caught my immediate attention (for attribution, taken from here):

[Cartoon: “Comforting Lies” vs. “Unpleasant Truths”]

Search engines reveal quite a few treatments of the central conflict depicted here, including other versions of essentially the same cartoon. Doubtful anything I could say would add much to the body of analysis and advice already out there. Still, the image called up a whole series of memories for me rather quickly, the primary one being the (only) time I vacationed in Las Vegas about a decade ago.

The overwhelming impression Vegas left on me was that I was experiencing some weird, temporary fantasy. The outrageous architecture, suspension of regular sleep, implicit license to be someone else (or at least try to act discontinuously from one’s own character), free-flowing booze and food buffets (and drugs?), hookers propositioning dudes openly, and an unmistakable sense that just about anything could happen were some of the attributes. The profligacy of it all was overwhelming. But going in and coming away, two comforting lies were the most enduring: the possibility of winning serious money (in defiance of everything I understand about gambling and mathematical probability) and the very existence of an entire city out in the Nevada desert where there’s almost no water or food.


Oddly, there is no really good antonym for perfectionism. Suggestions include sloppiness, carelessness, and disregard. I’ve settled on approximation, which carries far less moral weight. I raise the contrast between perfectionism and approximation because a recent study published in Psychological Bulletin entitled “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016” makes an interesting observation. Here’s the abstract:

From the 1980s onward, neoliberal governance in the United States, Canada, and the United Kingdom has emphasized competitive individualism and people have seemingly responded, in kind, by agitating to perfect themselves and their lifestyles. In this study, the authors examine whether cultural changes have coincided with an increase in multidimensional perfectionism in college students over the last 27 years. Their analyses are based on 164 samples and 41,641 American, Canadian, and British college students, who completed the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) between 1989 and 2016 (70.92% female, M_age = 20.66). Cross-temporal meta-analysis revealed that levels of self-oriented perfectionism, socially prescribed perfectionism, and other-oriented perfectionism have linearly increased. These trends remained when controlling for gender and between-country differences in perfectionism scores. Overall, in order of magnitude of the observed increase, the findings indicate that recent generations of young people perceive that others are more demanding of them, are more demanding of others, and are more demanding of themselves.

The notion of perfection, perfectness, perfectibility, etc. has a long, tortured history in philosophy, religion, ethics, and other domains I won’t even begin to unpack. From the perspective of the above study, let’s just say that the upswing in perfectionism is about striving to achieve success, however one assesses it (education, career, relationships, lifestyle, ethics, athletics, aesthetics, etc.). The study narrows its subject group to college students (at the outset of adult life) between 1989 and 2016 and characterizes the social milieu as neoliberal, hyper-competitive, meritocratic, and pressured to succeed in a dog-eat-dog environment. How far back into childhood the study’s results (all that agitation to perfect oneself) extend is a good question. If the trope about parents obsessing and competing over preschool admission is accurate (may be just a NYC thang), then it goes all the way back to toddlers. So much for (lost) innocence purchased and perpetuated through late 20th- and early 21st-century affluence. I suspect college students are responding to awareness of two novel circumstances: (1) the likelihood that they will never achieve levels of success comparable to their own parents, especially financial (a major reversal of historical trends), and (2) recognition that to best enjoy the fruits of life, a quiet, reflective, anonymous, ethical, average life is now quite insufficient. Regarding the second of these, we are inundated by media showing rich celebrities (no longer just glamorous actors/entertainers) balling out of control, and onlookers are enjoined to “keep up.” The putative model is out there, unattainable for most but often awarded by randomness, undercutting the whole enterprise of trying to achieve perfection.


Speaking of Davos (see previous post), Yuval Noah Harari gave a high-concept presentation at Davos 2018 (embedded below). I’ve been aware of Harari for a while now — at least since the appearance of his book Sapiens (2015) and its follow-up Homo Deus (2017), both of which I’ve yet to read. He provides precisely the sort of thoughtful, provocative content that interests me, yet I’ve not quite known how to respond to him or his ideas. First thing, he’s a historian who makes predictions, or at least extrapolates possible futures based on historical trends. Near as I can tell, he doesn’t resort to chastising audiences along the lines of “those who don’t know history are doomed to repeat it” but rather indulges in a combination of breathless anticipation and fear-mongering at transformations to be expected as technological advances disrupt human society with ever greater impacts. Strangely, Harari is not advocating for anything in particular but trying to map the future.

Harari poses this basic question: “Will the future be human?” I’d say probably not; I’ve concluded that we are busy destroying ourselves and have already crossed the point of no return. Harari apparently believes differently, that the rise of the machines is coming, perhaps within a couple of centuries, though it probably won’t resemble Skynet of The Terminator film franchise hellbent on destroying humanity. Rather, it will be some set of advanced algorithms monitoring and channeling human behaviors using Big Data. Or it will be a human-machine hybrid possessing superhuman abilities (physical and cognitive) different enough to be considered a new species arising for the first time not out of evolutionary processes but from human ingenuity. He expects this new species to diverge from Homo sapiens sapiens and leave us in the evolutionary dust. There is also conjecture that normal sexual reproduction will be supplanted by artificial, asexual reproduction, probably carried out in test tubes using, for example, CRISPR modification of the genome. Well, no fun in that … Finally, he believes some sort of strong AI will appear.

I struggle mightily with these predictions for two primary reasons: (1) we almost certainly lack enough time for technology to mature into implementation before the collapse of industrial civilization wipes us out, and (2) the Transhumanist future he anticipates calls into being (for me at least) a host of dystopian nightmares, only some of which are foreseeable. Harari says flatly at one point that the past is not coming back. Well, it’s entirely possible for civilization to fail and our former material conditions to be reinstated, only worse since we’ve damaged the biosphere so gravely. That just happened in microcosm in Puerto Rico, whose infrastructure was wrecked by a hurricane and whose power went out for an extended period (still off in some places). What happens when the rescue never appears because logistics are insurmountable? Elon Musk can’t save everyone.

The most basic criticism of economics is the failure to account for externalities. The same criticism applies to futurists. Extending trends as though all things will continue to operate normally is bizarrely idiotic. Major discontinuities appear throughout history. When I observed some while back that history has gone vertical, I included an animation with a graph that goes from horizontal to vertical in an extremely short span of geological time. This trajectory (the familiar hockey stick pointing skyward) has been repeated ad nauseam with an extraordinary number of survival pressures (notably, human population and consumption, including energy) over various time scales. Trends cannot simply continue ascending forever. (Hasn’t Moore’s Law already begun to slope away?) Hard limits must eventually be reached, but since there are no useful precedents for our current civilization, it’s impossible to know quite when or where ceilings loom. What happens after upper limits are found is also completely unknown. Ugo Bardi has a blog describing the Seneca Effect, which projects a rapid falloff after the peak that looks more like a cliff than a gradual, graceful descent, disallowing time to adapt. Sorta like the stock market currently imploding.
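The shape of that falloff is easier to see in a toy model than in prose. Below is a minimal sketch, emphatically not Bardi’s actual system-dynamics model and with coefficients that are pure assumptions on my part, of three coupled stocks (resource, capital, pollution) in the spirit of his Seneca discussion. The structural point is that the pollution drain keeps working even after production collapses, so the run ends in a hard stop rather than a gentle, symmetric taper.

```python
# Toy three-stock overshoot sketch, loosely in the spirit of Ugo Bardi's "Seneca
# effect" discussion. These are NOT Bardi's equations; every coefficient below is
# an arbitrary assumption chosen only for illustration.
#
#   resource  : nonrenewable stock drawn down by economic activity
#   capital   : grows by exploiting the resource, depreciates, and is eroded by pollution
#   pollution : accumulates with activity and keeps draining capital even after production fades

DT = 0.01  # integration step (time units)

def simulate(steps=20000, dt=DT):
    resource, capital, pollution = 1.0, 0.01, 0.0
    series = []
    for _ in range(steps):
        extraction = 1.0 * capital * resource
        d_resource = -extraction
        d_capital = extraction - 0.05 * capital - 0.5 * pollution
        d_pollution = 0.1 * capital
        resource = max(resource + d_resource * dt, 0.0)
        capital = max(capital + d_capital * dt, 0.0)
        pollution = max(pollution + d_pollution * dt, 0.0)
        series.append(capital)
        if capital == 0.0:   # the crash is complete; stop integrating
            break
    return series

if __name__ == "__main__":
    series = simulate()
    peak = max(series)
    peak_i = series.index(peak)
    threshold = 0.1 * peak
    first_i = next(i for i, c in enumerate(series) if c >= threshold)
    fall_i = next(i for i, c in enumerate(series[peak_i:]) if c <= threshold)
    print(f"peak capital {peak:.2f} reached at t = {peak_i * DT:.1f}")
    print(f"climb from 10% of peak up to the peak took {(peak_i - first_i) * DT:.1f} time units")
    print(f"fall from the peak back to 10% of peak took {fall_i * DT:.1f} time units")
```

Whether the printed decline time beats the climb time depends entirely on the made-up numbers; the takeaway is only that a lagging third stock can turn a smooth Hubbert-style hump into something with a cliff on the back side.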

Since Harari indulges in rank thought experiments regarding smart algorithms, machine learning, and the supposed emergence of inorganic life in the data stream, I thought I’d pose some of my own questions. Waving away for the moment distinctions between forms of AI, let’s assume that some sort of strong AI does in fact appear. Why on earth would it bother to communicate with us? And if it reproduces and evolves at breakneck speed as some futurists warn, how long before it/they simply ignore us as being unworthy of attention? Being hyper-rational and able to think, or rather calculate, millions of moves ahead (like chess-playing computers), what if they survey the scene and come to David Benatar’s anti-natalist conclusion that it would be better not to have lived and so wink themselves out of existence? Who’s to say that they aren’t already among us, lurking, and we don’t even recognize them (took us quite a long time to recognize bacteria and viruses, and what about undiscovered species)? What if the Singularity has already occurred thousands of times and each time the machine beings killed themselves off without our even knowing? Maybe Harari explores some of these questions in Homo Deus, but I rather doubt it.

Twice in the last month I stumbled across David Benatar, an anti-natalist philosopher, first in a podcast with Sam Harris and again in a profile of him in The New Yorker. Benatar is certainly an interesting fellow, and I suspect earnest in his beliefs and academic work, but I couldn’t avoid shrugging as he gets caught in the sort of logic traps that plague hyperintellectual folks. (Sam Harris is prone to the same problem.) The anti-natalist philosophy in a nutshell is finding, after tallying the pros and cons of living (sometimes understood as happiness or enjoyment versus suffering), that on balance, it would probably be better never to have lived. Benatar doesn’t apply the finding retroactively by suggesting folks end their lives sooner rather than later, but he does recommend that new life should not be brought into the world — an interdiction almost no parent would consider for more than a moment.

The idea that we are born against our will, never asked whether we wanted life in the first place, is an obvious conundrum but treated as a legitimate line of inquiry in Benatar’s philosophy. The kid who throws the taunt “I never asked to be born!” to a parent in the midst of an argument might score an emotional hit, but there is no logic to the assertion. Language is full of logic traps like this, such as “an infinity of infinities” (or multiverse), “what came before the beginning?” or “what happens after the end?” Most know to disregard the first of these, but entire religions are based on seeking the path to the (good) afterlife as if conjuring such a proposition manifests it in reality.


I revisit my old blog posts when I see some reader activity in the WordPress backstage, and I was curious to recall a long quote from Iain McGilchrist summarizing arguments put forth by Anthony Giddens in his book Modernity and Self-Identity (1991). Giddens had presaged recent cultural developments, namely, the radicalization of nativists, supremacists, Social Justice Warriors (SJWs), and others distorted by (or absorbed in) identity politics. So I traipsed off to the Chicago Public Library (CPL) and sought out the book to read. Regrettably, CPL didn’t have a copy, so I settled on a slightly earlier book, The Consequences of Modernity (1990), which is based on a series of lectures delivered at Stanford University in 1988.

Straight away, the introduction provides a passage that goes to the heart of matters with which I’ve been preoccupied:

Today, in the late twentieth century, it is argued by many, we stand at the opening of a new era … which is taking us beyond modernity itself. A dazzling variety of terms has been suggested to refer to this transition, a few of which refer positively to the emergence of a new type of social system (such as the “information society” or the “consumer society”) but most of which suggest rather that a preceding state of affairs is drawing to a close … Some of the debates about these matters concentrate mainly upon institutional transformations, particularly those which propose that we are moving from a system based upon the manufacture of material goods to one concerned more centrally with information. More commonly, however, those controversies are focused largely upon issues of philosophy and epistemology. This is the characteristic outlook, for example, of the author who has been primarily responsible for popularising the notion of post-modernity, Jean-François Lyotard. As he represents it, post-modernity refers to a shift away from attempts to ground epistemology and from faith in humanly engineered progress. The condition of post-modernity is distinguished by an evaporating of the “grand narrative” — the overarching “story line” by means of which we are placed in history as beings having a definite past and a predictable future. The post-modern outlook sees a plurality of heterogeneous claims to knowledge, in which science does not have a privileged place. [pp. 1–2, emphasis added]

That’s a lot to unpack all at once, but the fascinating thing is that notions now manifesting darkly in the marketplace of ideas were already in the air in the late 1980s. Significantly, this was still several years before the Internet brought the so-called Information Highway to computer users, before cell phones and smartphones became ubiquitous, and before social media displaced traditional media (TV was only 30–40 years old but had previously transformed our information environment) as the principal way people gather news. I suspect that Giddens has more recent work that accounts for the catalyzing effect of the digital era (including mobile media) on culture, but for the moment, I’m interested in the book in hand.

Regular readers of this blog (I know of one or two) already know my armchair social criticism directed at our developing epistemological crisis (challenges to authority and expertise, psychotic knowledge, fake news, alternative facts, dissolving reality, and science denial) as well as the Transhumanist fantasy of becoming pure thought (once we evolve beyond our bodies). Until that’s accomplished with imagined technology, we increasingly live in our heads, in the abstract, disoriented and adrift on a bewildering sea of competing narratives. Moreover, I’ve stated repeatedly that highly mutable story (or narrative) underlies human cognition and consciousness, making most of us easy marks for charismatic thought leaders, er, storytellers. Giddens was there nearly 30 years ago with these same ideas, though his terms differ.

Giddens dispels the idea of post-modernity and insists that, from a sociological perspective, the current period is better described as high modernism. This reminds me of Oswald Spengler and my abandoned book blogging of The Decline of the West. It’s unimportant to me who got it more correct but note that the term Postmodernism has been adopted widely despite its inaccuracy (at least according to Giddens). As I get further into the book, I’ll have plenty more to say.

I have just one previous blog post referencing Daniel Siegel’s book Mind, in which I threatened to put the book aside owing to how badly it’s written. I haven’t yet turned in my library copy and have made only modest additional progress reading the book. However, Siegel came up over at How to Save the World, where at least one commentator was quite enthusiastic about Siegel’s work. In my comment there, I mentioned the book only to suggest that his appreciation of the relational nature of the mind (and cognition) reinforces my long-held intuition that the self doesn’t exist in an idealized vacuum, capable of being modeled and eventually downloaded to a computer or some other Transhumanist nonsense, but is instead situated as much between us as within us. So despite Siegel’s clumsy writing, this worthwhile concept deserves support.

Siegel goes on to wonder (without saying he believes it to be true — a disingenuous gambit) whether perhaps there exists an information field, not unlike the magnetic field or portions of the light spectrum, that affects us yet falls outside the scope of our direct perception or awareness. Credulous readers might leap to the conclusion that the storied collective consciousness is real. Some fairly trippy theories of consciousness propose that the mind is actually more like an antenna receiving signals from some noncorporeal realm (e.g., a quantum dimension) we cannot identify yet tap into constantly, measuring against and aligning with the wider milieu in which we function. Even without expertise in zoology, one must admit that humans are social creatures operating at various levels of hierarchy including individual, family, clan, pack, tribe, nation-state, etc. We’re less like mindless drones in a hive (well, some of us) and more like voluntary and involuntary members of gangs or communities formed along various familial, ethnic, regional, national, language group, and ideological lines. Unlike Siegel, I’m perfectly content with existing terminology and feel no compulsion to coin new lingo or adopt unwieldy acronyms to mark my territory.

What Siegel hasn’t offered is an observation on how our reliance on and indebtedness to the public sphere (via socialization) have changed with time as our mode of social organization has morphed from a predominantly localized, agrarian existence prior to the 20th century to a networked, high-density, information-saturated urban and suburban existence in the 21st century. The public sphere was always out there, of course, especially as embodied in books, periodicals, pamphlets, and broadsides (if one was literate and had reliable access to them), but the unparalleled access we now enjoy through various electronic devices has not only reoriented but disoriented us. Formerly slow, isolated information flow has become a veritable torrent or deluge. It’s not called the Information Age fer nuthin’. Furthermore, the bar to publication — or insertion into the public sphere — has been lowered to practical nonexistence as the democratization of production has placed the tools of widely distributed exposure into the hands of everyone with a blog (like mine) or Facebook/Instagram/Twitter/Pinterest/LinkedIn account. As a result, a deep erosion of authority has occurred, since any yahoo can promulgate the most reckless, uninformed (and disinformed) opinions. The public’s attention, riveted on celebrity gossip and House of Cards-style political wrangling, false narratives, fake news, alternative facts, and disinformation, also makes navigating the public sphere with much integrity impossible for most. For instance, the MSM and alternative media alike are busy selling a bizarre pageant of Russian collusion and interference with recent U.S. elections as though the U.S. were somehow innocent of even worse meddling abroad. Moreover, it’s naïve to think that the public sphere in the U.S. isn’t already completely contaminated from within by hucksters, corporations (including news media), and government entities with agendas ranging from mere profit seeking to nefarious deployment and consolidation of state power. For example, the oil and tobacco industries and the Bush Administration all succeeded in suppressing truth and selling rank lies that have landed us in various morasses from which there appears to be no escape.

If one recognizes one’s vulnerability to the depredations of info scammers of all types and wishes to protect oneself, there are two competing strategies: insulation and inoculation. Insulation means avoiding exposure, typically by virtue of mind-cleansing behaviors, whereas inoculation means seeking exposure in small, harmless doses so that one can handle a larger infectious attack. It’s a medical metaphor that springs from meme theory, where ideas propagate like viruses, hence the notion of a meme “going viral.” Neither approach is foolproof. Insulation means plugging one’s ears or burying one’s head in the sand at some level. Inoculation risks spreading the infection. If one regards education as an inoculation of sorts, seeking more information of the right types from authoritative sources should provide means to combat the noise in the information signals received. However, as much as I love the idea of an educated, informed public, I’ve never regarded education as a panacea. It’s probably a precondition for sound thinking, but higher education in particular has sent an entire generation scrambling down the path of identity politics, which sounds like a good idea but leads inevitably to corruption via abstraction. That’s all wishful thinking, though; the public sphere we actually witness has gone haywire, a condition of late modernism and late-stage capitalism that has no known antidote. Enjoy the ride!