Posts Tagged ‘Futurism’

From time to time, I admit that I’m in no position to referee disputes, usually out of my lack of technical expertise in the hard sciences. I also avoid the impossible task of policing the Internet by assiduously pointing out error wherever it occurs. Others concern themselves with correcting the record and/or reinterpreting argument with improved context and accuracy. However, once in a while, something crosses my desk that gets under my skin. An article by James Ostrowski entitled “What America Has Done To its Young People is Appalling,” published at LewRockwell.com, is such a case. It’s undoubtedly a coincidence that the most famous Rockwell is arguably Norman Rockwell, whose celebrated illustrations for the Saturday Evening Post in particular helped reinforce a charming midcentury American mythology. Lew Rockwell, OTOH, is described briefly at the website’s About blurb:

The daily news and opinion site LewRockwell.com was founded in 1999 by anarcho-capitalists Lew Rockwell … and Burt Blumert to help carry on the anti-war, anti-state, pro-market work of Murray N. Rothbard.

Those political buzzwords probably deserve some unpacking. However, that project falls outside my scope. In short, they handily foist blame for what ails us in American culture on government planning, as distinguished from the comparative freedom of libertarianism. Government earns its share of blame, no doubt, especially with its enthusiastic prosecution of war (now a forever war); but as snapshots of competing political philosophies, these buzzwords are reductive almost to the point of meaninglessness. Ostrowski lays blame more specifically on feminism and progressive big government and harkens back to an idyllic 1950s nuclear family fully consonant with Norman Rockwell’s illustrations, thus invoking the nostalgic frame.

… the idyllic norm of the 1950’s, where the mother typically stayed home to take care of the kids until they reached school age and perhaps even long afterwards, has been destroyed.  These days, in the typical American family, both parents work fulltime which means that a very large percentage of children are consigned to daycare … in the critical first five years of life, the vast majority of Americans are deprived of the obvious benefits of growing up in an intact family with the mother at home in the pre-school years. We baby boomers took this for granted. That world is gone with the wind. Why? Two main reasons: feminism and progressive big government. Feminism encouraged women to get out of the home and out from under the alleged control of husbands who allegedly controlled the family finances.

Problem is, 1950s social configurations in the U.S. were the product of a convergence of historical forces, not least of which were the end of WWII and newfound American geopolitical and economic prominence. More pointedly, an entire generation of young men and women who had deferred family life during perilous wartime were then able to marry, start families, and provide for them on a single income — typically that of the husband/father. That was the baby boom. Yet to enjoy the benefits of the era fully, one probably needed to be a WASPy middle-class male or the child of one. Women and people of color fared … differently. After all, the 1950s yielded to the sexual revolution and civil rights era one decade later, both of which aimed specifically to improve the lived experience of, well, women and people of color.

Since the 1950s were only roughly 60 years ago, it might be instructive to consider how life was another 60 years before then, or in the 1890s. If one lived in an eastern American city, life was often a Dickensian dystopia, complete with child labor, poorhouses, orphanages, asylums, and unhygienic conditions. If one lived in an agrarian setting, which was far more prevalent before the great 20th-century migration to cities, then life was frequently dirt-poor subsistence and/or pioneer homesteading requiring dawn-to-dusk labor. Neither mode yet enjoyed social planning and progressive support including, for example, sewers and other modern infrastructure, public education, and economic protections such as unionism and trust busting. Thus, 19th-century America might be characterized fairly as being closer to anarcho-capitalism than at any time since. One of its principal legacies, one must be reminded, was pretty brutal exploitation of (and violence against) labor, as evidenced by the emergence of political parties that sought to redress its worst scourges. Hindsight informs us now that reforms were slow, partial, and impermanent, leading to the observation that among all tried forms of self-governance, democratic capitalism can be characterized as perhaps the least awful.

So yeah, the U.S. came a long way from 1890 to 1950, especially in terms of standard of living, but may well be backsliding as the 21st-century middle class is hollowed out (a typical income — now termed household income — being rather challenging for a family), aspirations to rise economically above one’s parents’ level no longer function, and the culture disintegrates into tribal resentments and unrealistic fantasies about nearly everything. Ostrowski marshals a variety of demographic facts and figures to support his argument (with which I agree in large measure), but he fails to make a satisfactory causal connection with feminism and progressivism. Instead, he sounds like 45 selling his slogan Make America Great Again (MAGA), meaning let’s turn back the clock to those nostalgic 1950s happy days. Interpretations of that sentiment run in all directions from innocent to virulent (but coded). By placing blame on feminism and progressivism, it’s not difficult to hear anyone citing those putative causes as an accusation that, if only those feminists and progressives (and others) had stayed in their assigned lanes, we wouldn’t be dealing now with cultural crises that threaten to undo us. What Ostrowski fails to acknowledge is that despite all sorts of government activity over the decades, no one in the U.S. is steering the culture nearly as actively as in centrally planned economies and cultures, current and historical, which in their worst instances are fascist and/or totalitarian. One point I’ll agree on, however, just to be charitable, is that the mess we’ve made and will leave to youngsters is truly appalling.


Caveat: Rather uncharacteristically long for me. Kudos if you have the patience for all of this.

Caught the first season of HBO’s series Westworld on DVD. I have a boyhood memory of the original film (1973) with Yul Brynner and a dim memory of its sequel Futureworld (1976). The sheer charisma of Yul Brynner in the role of the gunslinger casts a long shadow over the new production, not that most of today’s audiences have seen the original. No doubt, 45 years of technological development in film production lends the new version some distinct advantages. Visual effects are quite stunning and Utah landscapes have never been used more appealingly in terms of cinematography. Moreover, storytelling styles have changed, though it’s difficult to argue convincingly that they’re necessarily better now than then. Competing styles only appear dated. For instance, the new series has immensely more time to develop its themes; but the ancient parables of hubris and loss of control over our own creations run amok (e.g., Shelley’s Frankenstein, or more contemporaneously, the surprisingly good new movie Upgrade) have compact, appealing narrative arcs quite different from constant teasing and foreshadowing of plot developments while actual plotting proceeds glacially. Viewers wait an awful lot longer in the HBO series for resolution of tensions and emotional payoffs, by which time investment in the story lines has been dispelled. There is also no terrifying crescendo of violence and chaos demanding rescue or resolution. HBO’s Westworld often simply plods on. To wit, a not insignificant portion of the story (um, side story) is devoted to boardroom politics (yawn) regarding who actually controls the Westworld theme park. Plot twists and reveals, while mildly interesting (typically guessed by today’s cynical audiences), do not tie the narrative together successfully.

Still, Westworld provokes considerable interest from me due to my fascination with human consciousness. The initial episode builds out the fictional future world with characters speaking exposition clearly owing its inspiration to Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (another reference audiences are quite unlikely to know or recognize). I’ve had the Julian Jaynes Society’s website bookmarked for years and read the book some while back; never imagined it would be captured in modern fiction. Jaynes’ thesis (if I may be so bold as to summarize radically) is that modern consciousness coalesced around the collapse of multiple voices in the head — ideas, impulses, choices, decisions — into a single stream of consciousness perhaps better understood (probably not) as the narrative self. (Aside: the multiple voices of antiquity correspond to polytheism, whereas the modern singular voice corresponds to monotheism.) Thus, modern human consciousness arose over several millennia as the bicameral mind (the divided brain having two camerae, or chambers) functionally collapsed. The underlying story of the new Westworld is the emergence of machine consciousness, a/k/a strong AI, a/k/a The Singularity, while the old Westworld was about a mere software glitch. Exploration of machine consciousness modeling (e.g., improvisation builds on memory to create awareness) as a proxy for better understanding human consciousness might not be the purpose of the show, but it’s clearly implied. And although conjectural, the slow emergence of human consciousness contrasts sharply with the abrupt ON switch of theorized machine consciousness. Westworld treats them as roughly equivalent, though in fairness, 35 years or so in Westworld is in fact abrupt compared to several millennia. (Indeed, the story asserts that machine consciousness sparked alive repeatedly (which I suggested here) over those 35 years but was dialed back each time. Never mind all the unexplored implications.) Additionally, the fashion in which Westworld uses the term bicameral ranges from sloppy to meaningless, like the infamous technobabble of Star Trek.


I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases borne out of methodology, and to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Throughout history, plenty of prohibitions against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without saying it manifestly, Harari is citing the precautionary principle. Harari also observed that some of those effects can be displaced by hundreds or thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans from our reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.

See this post on Seven Billion Day only a few years ago as a launching point. We’re now closing in on 7.5 billion people worldwide according to the U.S. Census Bureau. At least one other counter indicates we’ve already crossed that threshold. What used to be called the population explosion or the population bomb has lost its urgency and become generically population growth. By now, application of euphemism to mask intractable problems should be familiar to everyone. I daresay few are fooled, though plenty are calmed enough to stop paying attention. If there is anything to be done to restrain ourselves from proceeding down this easily recognized path to self-destruction, I don’t know what it is. The unwillingness to accept restraints in other aspects of human behavior demonstrates pretty well that consequences be damned — especially if they’re far enough delayed in time that we get to enjoy the here and now.

Two additional links (here and here) provide abundant further information on population growth if one desires to delve more deeply into the topic. The tone of these sites is sober, measured, and academic. As with climate change, hysterical and panic-provoking alarmism is avoided, but dangers known decades and centuries ago have persisted without serious redress. While it’s true that the growth rate has decreased considerably since its peak in 1960 or so (the height of the postwar baby boom), absolute numbers continue to climb. The lack of immediate concern reminds me of Al Bartlett’s articles and lectures on the failure to understand the exponential function in math (mentioned in my prior post). Sure, boring old math about which few care. The metaphor that applies is yeast growing in a culture with a doubling factor that makes everything look just peachy until the final doubling that kills everything. In this metaphor, people are the unthinking yeast that believe there’s plenty of room and food and other resources in the culture (i.e., on the planet) and keep consuming and reproducing until everyone dies en masse. How far away in time that final human doubling is no one really knows.
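Bartlett’s point about the exponential function can be made concrete with a few lines of arithmetic. The following is a purely illustrative sketch (the carrying capacity and starting population are arbitrary, hypothetical numbers): a population that doubles each period still has half its resources remaining just one doubling before the end, which is exactly why everything looks peachy almost all the way down.

```python
# Illustrative sketch of exponential doubling against a fixed carrying
# capacity (all numbers are arbitrary, chosen only to make the point).
capacity = 1024.0   # hypothetical carrying capacity of the "culture"
population = 1.0    # hypothetical starting population

fractions = []      # fraction of capacity filled at each doubling
while population <= capacity:
    fractions.append(population / capacity)
    population *= 2

# One doubling before the end, the culture is still half empty;
# two doublings before, it is three-quarters empty.
print(f"{fractions[-2]:.0%} full one doubling before the end")
print(f"{fractions[-3]:.0%} full two doublings before the end")
```

In this toy setup it takes ten doublings to fill the vessel, and the yeast have no warning worth noticing until the last period or two.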

Which brings me to something rather ugly: hearings to confirm Brett Kavanaugh’s appointment to the U.S. Supreme Court. No doubt conservative Republican presidents nominate similarly conservative judges just as Democratic presidents nominate progressive centrist judges. That’s to be expected. However, Kavanaugh is being asked pointed questions about settled law and legal precedents perpetually under attack by more extreme elements of the right wing, including Roe v. Wade from 1973. Were we (in the U.S.) to revisit that decision and remove legal abortion (already heavily restricted), public outcry would be horrific, to say nothing of the return of so-called back-alley abortions. Almost no one undertakes such actions lightly. A look back through history, however, reveals a wide range of methods to forestall pregnancy, end pregnancies early, and/or end newborn life quickly (infanticide). Although repugnant to almost everyone, attempts to legislate abortion out of existence and/or punish lawbreakers will succeed no better than did Prohibition or the War Against Drugs. (Same can be said of premarital and underage sex.) Certain aspects of human behavior are frankly indelible despite the moral indignation of one or another political wing. Whether Kavanaugh truly represents the linchpin that will bring new upheavals is impossible to know with certainty. Stay tuned, I guess.

Abortion rights matter quite a lot when placed in context with population growth. Aggregate human behaviors drive out of existence all sorts of plant and animal populations routinely. This includes human populations (domestic and foreign) reduced to abject poverty and mad, often criminal scrambles for survival. The view from on high is that those whose lives fall below some measure of worthwhile contribution are useless eaters. (I don’t recommend delving deeper into that term; it’s a particularly ugly ideology with a long, tawdry history.) Yet removing abortion rights would almost certainly swell those ranks. Add this topic to the growing list of things I just don’t get.

Haven’t purged my bookmarks in a long time. I’ve been collecting material about technological dystopia already now operating but expected to worsen. Lots of treatments out there and lots of jargon. My comments are limited.

Commandeering attention. James Williams discusses his recognition that interference media (all modern media now) keep people attuned to their feeds and erode free will, ultimately threatening democratic ideals by estranging people from reality. An inversion has occurred: information scarcity and attention abundance have become information abundance and attention scarcity.

Outrage against the machines. Ran Prieur (no link) takes a bit of the discussion above (probably where I got it) to illustrate how personal responsibility about media habits is confused, specifically, the idea that it’s okay for technology to be adversarial.

In the Terminator movies, Skynet is a global networked AI hostile to humanity. Now imagine if a human said, “It’s okay for Skynet to try to kill us; we just have to try harder to not be killed, and if you fail, it’s your own fault.” But that’s exactly what people are saying about an actual global computer network that seeks to control human behavior, on levels we’re not aware of, for its own benefit. Not only has the hostile AI taken over — a lot of people are taking its side against their fellow humans. And their advice is to suppress your biological impulses and maximize future utility like a machine algorithm.

Big Data is Big Brother. Here’s a good TED Talk by Zeynep Tufekci on how proprietary machine-learning algorithms we no longer control or understand, ostensibly used to serve targeted advertising, possess the power to influence elections and radicalize people. I call the latter down-the-rabbit-hole syndrome, where one innocuous video or news story is followed by another of increasing extremity until the viewer or reader reaches a level of outrage and indignation activating an irrational response.


In the sense that a picture is worth a thousand words, this cartoon caught my immediate attention (for attribution, taken from here):

[cartoon: comforting lies vs. unpleasant truths]

Search engines reveal quite a few treatments of the central conflict depicted here, including other versions of essentially the same cartoon. Doubtful anything I could say would add much to the body of analysis and advice already out there. Still, the image called up a whole series of memories for me rather quickly, the primary one being the (only) time I vacationed in Las Vegas about a decade ago.


Oddly, there is no really good antonym for perfectionism. Suggestions include sloppiness, carelessness, and disregard. I’ve settled on approximation, which carries far less moral weight. I raise the contrast between perfectionism and approximation because a recent study published in Psychological Bulletin entitled “Perfectionism Is Increasing Over Time: A Meta-Analysis of Birth Cohort Differences From 1989 to 2016” makes an interesting observation. Here’s the abstract:

From the 1980s onward, neoliberal governance in the United States, Canada, and the United Kingdom has emphasized competitive individualism and people have seemingly responded, in kind, by agitating to perfect themselves and their lifestyles. In this study, the authors examine whether cultural changes have coincided with an increase in multidimensional perfectionism in college students over the last 27 years. Their analyses are based on 164 samples and 41,641 American, Canadian, and British college students, who completed the Multidimensional Perfectionism Scale (Hewitt & Flett, 1991) between 1989 and 2016 (70.92% female, Mage = 20.66). Cross-temporal meta-analysis revealed that levels of self-oriented perfectionism, socially prescribed perfectionism, and other-oriented perfectionism have linearly increased. These trends remained when controlling for gender and between-country differences in perfectionism scores. Overall, in order of magnitude of the observed increase, the findings indicate that recent generations of young people perceive that others are more demanding of them, are more demanding of others, and are more demanding of themselves.

The notion of perfection, perfectness, perfectibility, etc. has a long tortured history in philosophy, religion, ethics, and other domains I won’t even begin to unpack. From the perspective of the above study, let’s just say that the upswing in perfectionism is about striving to achieve success, however one assesses it (education, career, relationships, lifestyle, ethics, athletics, aesthetics, etc.). The study narrows its subject group to college students (at the outset of adult life) between 1989 and 2016 and characterizes the social milieu as neoliberal, hyper-competitive, meritocratic, and pressured to succeed in a dog-eat-dog environment. How far back into childhood results of the study (agitation) extend is a good question. If the trope about parents obsessing and competing over preschool admission is accurate (may be just a NYC thang), then it goes all the way back to toddlers. So much for (lost) innocence purchased and perpetuated through late 20th- and early 21st-century affluence. I suspect college students are responding to awareness of two novel circumstances: (1) likelihood they will never achieve levels of success comparable to their own parents, especially financial (a major reversal of historical trends) and (2) recognition that to best enjoy the fruits of life, a quiet, reflective, anonymous, ethical, average life is now quite insufficient. Regarding the second of these, we are inundated by media showing rich celebrities (no longer just glamorous actors/entertainers) balling out of control, and onlookers are enjoined to “keep up.” The putative model is out there, unattainable for most but often awarded by randomness, undercutting the whole enterprise of trying to achieve perfection.


Speaking of Davos (see previous post), Yuval Noah Harari gave a high-concept presentation at Davos 2018 (embedded below). I’ve been aware of Harari for a while now — at least since the appearance of his book Sapiens (2015) and its follow-up Homo Deus (2017), both of which I’ve yet to read. He provides precisely the sort of thoughtful, provocative content that interests me, yet I’ve not quite known how to respond to him or his ideas. First thing, he’s a historian who makes predictions, or at least extrapolates possible futures based on historical trends. Near as I can tell, he doesn’t resort to chastising audiences along the lines of “those who don’t know history are doomed to repeat it” but rather indulges in a combination of breathless anticipation and fear-mongering at transformations to be expected as technological advances disrupt human society with ever greater impacts. Strangely, Harari is not advocating for anything in particular but trying to map the future.

Harari poses this basic question: “Will the future be human?” I’d say probably not; I’ve concluded that we are busy destroying ourselves and have already crossed the point of no return. Harari apparently believes differently: the rise of the machine is coming, in a couple of centuries perhaps, though it probably won’t resemble Skynet of The Terminator film franchise hellbent on destroying humanity. Rather, it will be some set of advanced algorithms monitoring and channeling human behaviors using Big Data. Or it will be a human-machine hybrid possessing superhuman abilities (physical and cognitive) different enough to be considered a new species arising for the first time not out of evolutionary processes but from human ingenuity. He expects this new species to diverge from homo sapiens sapiens and leave us in the evolutionary dust. There is also conjecture that normal sexual reproduction will be supplanted by artificial, asexual reproduction, probably carried out in test tubes using, for example, CRISPR modification of the genome. Well, no fun in that … Finally, he believes some sort of strong AI will appear.

I struggle mightily with these predictions for two primary reasons: (1) we almost certainly lack enough time for technology to mature into implementation before the collapse of industrial civilization wipes us out, and (2) the Transhumanist future he anticipates calls into being (for me at least) a host of dystopian nightmares, only some of which are foreseeable. Harari says flatly at one point that the past is not coming back. Well, it’s entirely possible for civilization to fail and our former material conditions to be reinstated, only worse since we’ve damaged the biosphere so gravely. Just happened in Puerto Rico in microcosm when its infrastructure was wrecked by a hurricane and the power went out for an extended period of time (still off in some places). What happens when the rescue never appears because logistics are insurmountable? Elon Musk can’t save everyone.

The most basic criticism of economics is the failure to account for externalities. The same criticism applies to futurists. Extending trends as though all things will continue to operate normally is bizarrely idiotic. Major discontinuities appear throughout history. When I observed some while back that history has gone vertical, I included an animation with a graph that goes from horizontal to vertical in an extremely short span of geological time. This trajectory (the familiar hockey stick pointing skyward) has been repeated ad nauseam with an extraordinary number of survival pressures (notably, human population and consumption, including energy) over various time scales. Trends cannot simply continue ascending forever. (Hasn’t Moore’s Law already begun to slope away?) Hard limits must eventually be reached, but since there are no useful precedents for our current civilization, it’s impossible to know quite when or where ceilings loom. What happens after upper limits are found is also completely unknown. Ugo Bardi has a blog describing the Seneca Effect, which projects a rapid falloff after the peak that looks more like a cliff than a gradual, graceful descent, disallowing time to adapt. Sorta like the stock market currently imploding.

Since Harari indulges in rank thought experiments regarding smart algorithms, machine learning, and the supposed emergence of inorganic life in the data stream, I thought I’d pose some of my own questions. Waving away for the moment distinctions between forms of AI, let’s assume that some sort of strong AI does in fact appear. Why on earth would it bother to communicate with us? And if it reproduces and evolves at breakneck speed as some futurists warn, how long before it/they simply ignore us as being unworthy of attention? Being hyper-rational and able to calculate millions of moves ahead (like chess-playing computers), what if they survey the scene and come to David Benatar’s anti-natalist conclusion that it would be better not to have lived and so wink themselves out of existence? Who’s to say that they aren’t already among us, lurking, and we don’t even recognize them (took us quite a long time to recognize bacteria and viruses, and what about undiscovered species)? What if the Singularity has already occurred thousands of times and each time the machine beings killed themselves off without our even knowing? Maybe Harari explores some of these questions in Homo Deus, but I rather doubt it.

Be forewarned: this is long and self-indulgent. Kinda threw everything and the kitchen sink at it.

In the August 2017 issue of Harper’s Magazine, Walter Kirn’s “Easy Chair” column called “Apocalypse Always” revealed his brief, boyhood fascination with dystopian fiction. This genre has been around for a very long time, to which the Cassandra myth attests. Kirn’s column is more concerned with “high mid-twentieth-century dystopian fiction,” which in his view is now classic and canonical, an entire generation of Baby Boomers having been educated in such patterned thought. A new wave of dystopian fiction appeared in the 1990s and yet another more recently in the form of Young Adult novels (and films) that arguably serve better as triumphal coming-of-age stories albeit under dystopian circumstances. Kirn observes a perennial theme present in the genre: the twin disappearances of freedom and information:

In the classic dystopias, which concern themselves with the lack of freedom and not with surplus freedom run amok (the current and unforeseen predicament of many), society is superbly well organized, resembling a kind of hive or factory. People are sorted, classified, and ranked, their individuality suppressed through goon squads, potent narcotics, or breeding programs. Quite often, they wear uniforms, and express themselves, or fail to, in ritual utterance and gestures.

Whether Americans in 2018 resemble hollowed-out zombies suffering under either boot-heel or soft-serve oppression is a good question. Some would argue just that in homage to classic dystopias. Kirn suggests briefly that we might instead suffer from runaway anarchy, where too much freedom and licentiousness have led instead to a chaotic and disorganized society populated by citizens who can neither govern nor restrain themselves.

Disappearance of information might be understood in at least three familiar aspects of narrative framing: what happened to get us to this point (past as exposition, sometimes only hinted at), what the hell? is going on (present as conflict and action), and how it gets fixed (future as resolution and denouement). Strict control over information exercised by classic dystopian despots doesn’t track to conditions under which we now find ourselves, where more disorganized, fraudulent, and degraded information than ever is available alongside small caches of wisdom and understanding buried somewhere in the heap and discoverable only with the benefit of critical thinking flatly lost on at least a couple generations of miseducated graduates. However, a coherent narrative of who and what we are and what realistic prospects the future may hold has not emerged since the stifling version of the 1950s nuclear family and middle-class consumer contentment. Kirn makes this comparison directly, where classic dystopian fiction

focus[es] on bureaucracy, coercion, propaganda, and depersonalization, overstates both the prowess of the hierarchs and the submissiveness of the masses, whom it still thinks of as the masses. It does not contemplate Trump-style charlatanism at the top, or a narcissistic populace that prizes attention over privacy. The threats to individualism are paramount; the scourge of surplus individualism, with everyone playing his own dunce king and slurping up resources until he bursts, goes unexplored.

Kirn’s further observations are worth a look. Go read for yourself.

(more…)

Commentary on the previous post poses a challenging question: having perceived that civilization is set on a collision course with reality, what is being done to address that existential problem? More pointedly, what are you doing? Most rubes seem to believe that we can technofix the problem, alter course and set off in a better, even utopian direction filled with electronic gadgetry (e.g., the Internet of things), death-defying medical technologies (as though that goal were even remotely desirable), and an endless supply of entertainments and ephemera curated by media shilling happy visions of the future (in high contrast with actual deprivation and suffering). Realists may appreciate that our charted course can’t be altered anymore considering the size and inertia of the leviathan industrial civilization has become. Figuratively, we’re aboard the RMS Titanic, full steam ahead, killer iceberg(s) looming in the darkness. The only option is to see our current path through to its conclusion. Maybe there’s a middle ground in between, where a hard reset foils our fantasies but at least allows (some of) us to continue living on the surface of Planet Earth.

Problem is, the gargantuan, soul-destroying realization of near-term extinction has the potential to radicalize even well-balanced people, and the question “what are you doing?” is tantamount to an accusation that you’re not doing enough because, after all, nothing will ever be enough. We’ve been taught repeatedly to eat right, brush our teeth, get some exercise, and be humble. Yet those simple requisites for a happy, healthy life are frequently ignored. How likely is it that we will then heed the dire message that everything we know will soon be swept away?

The mythological character Cassandra, who prophesied doom, was cursed never to be believed, as was Chicken Little. The fabulous Boy Who Cried Wolf (from Aesop’s Fables) was cursed with bad timing. Sandwich-board prophets, typically hirsute Jesus freaks with some version of the message “Doom is nigh!” inscribed on the boards, are a cliché now almost always understood as the set-up for some sort of joke.

It’s an especially sick joke when the unheeded message proves to be true. If one is truly radicalized, then self-immolation on the sidewalk in front of the White House may be one measure of commitment, but the irony is that no one takes such behavior seriously except as an indication of how unhinged the prophet of doom has gotten (suggesting a different sort of commitment). Yet that’s where we’ve arrived in the 21st century. Left/right, blue/red factions have abandoned the centrist middle ground and moved conspicuously toward the radical fringes in what’s being called extreme social fragmentation. On some analyses, the rising blood tide of terrorists and mass murderers are examples of an inchoate protest against the very nature of existence, a complete ontological rejection. When the ostensible purpose of, say, the Las Vegas shooter, is to take out as many people as possible, rejecting other potential sites as not promising enough for high body counts, it may not register in the public mind as a cry in the wilderness, an extreme statement that modern life is no longer worth living, but the action speaks for itself even in the absence of a formal manifesto articulating a collapsed philosophy.

In such a light, the sandwich-board prophet, by eschewing violence and hysteria, may actually be performing a modest ministerial service. Wake up and recognize that all living things must eventually die, that our time is short. Cherish what you have, be among those you love and who love you, and brace yourself.