Archive for the ‘Philosophy’ Category

Returning to Pankaj Mishra’s Age of Anger, chapter 2 (subtitled “Progress and its Contradictions”) profiles two writers of the 18th-century Enlightenment: François-Marie Arouet (1694–1778), better known by his nom de plume Voltaire, and Jean-Jacques Rousseau (1712–1778). Voltaire was a proponent and embodiment of Enlightenment values and ethics, whereas Rousseau was among their primary critics. Both were hugely influential, and the controversy inherent in their rival perspectives remains unresolved even today. First come Rousseau’s criticisms (in Mishra’s prose):

… the new commercial society, which was acquiring its main features of class divisions, inequality and callous elites during the eighteenth century, made its members corrupt, hypocritical and cruel with its prescribed values of wealth, vanity and ostentation. Human beings were good by nature until they entered such a society, exposing themselves to ceaseless and psychologically debilitating transformation and bewildering complexity. Propelled into an endless process of change, and deprived of their peace and stability, human beings failed to be either privately happy or active citizens [p. 87]

This assessment could easily be mistaken for a description of the 1980s and 90s: ceaseless change and turmoil as new technological developments (e.g., the Internet) challenged everyone to reorient and reinvent themselves, often as a brand. Cultural transformation in the 18th century, however, was about more than just emerging economic reconfigurations. New, secular, free thought and rationalism openly challenged orthodoxies formerly imposed by religious and political institutions and demanded intellectual and entrepreneurial striving to participate meaningfully in charting new paths for a progressive society purportedly no longer anchored in the past. Mishra goes on:

It isn’t just that the strong exploit the weak; the powerless themselves are prone to enviously imitate the powerful. But people who try to make more of themselves than others end up trying to dominate others, forcing them into positions of inferiority and deference. The lucky few on top remain insecure, exposed to the envy and malice of the also-rans. The latter use all means available to them to realize their unfulfilled cravings while making sure to veil them with a show of civility, even benevolence. [p. 89]

Sounds quite contemporary, no? Driving the point home:

What makes Rousseau, and his self-described ‘history of the human heart’, so astonishingly germane and eerily resonant is that, unlike his fellow eighteenth-century writers, he described the quintessential inner experience of modernity for most people: the uprooted outsider in the commercial metropolis, aspiring for a place in it, and struggling with complex feelings of envy, fascination, revulsion and rejection. [p. 90]

While most of the chapter describes Rousseau’s rejection and critique of 18th-century ethics, Mishra at one point depicts Rousseau arguing for instead of against something:

Rousseau’s ideal society was Sparta, small, harsh, self-sufficient, fiercely patriotic and defiantly un-cosmopolitan and uncommercial. In this society at least, the corrupting urge to promote oneself over others, and the deceiving of the poor by the rich, could be counterpoised by the surrender of individuality to public service, and the desire to seek pride for community and country. [p. 92]

Notably absent from Mishra’s profile is the meme mistakenly applied to Rousseau’s diverse criticism: the noble savage. Rousseau praises provincial men (patriarchal orientation acknowledged) largely unspoilt by the corrupting influence of commercial, cosmopolitan society devoted to individual self-interest and amour propre, and his ideal (above) is uncompromising. Although Rousseau could have insinuated himself successfully into fashionable salons and academic posts, his real affinity was with the weak and downtrodden — the peasant underclass — who were mostly passed over by rapidly modernizing society. Others managed to raise their station in life above the peasantry to join the bourgeoisie (disambiguation needed on that term). Mishra’s description (via Rousseau) of this middle and upper-middle-class group provided my first real understanding of the popular disdain many report toward bourgeois values using the derisive term bourgie (clearer when spoken than when written).

Profile of Voltaire to follow in part 2.


First, a bit of history. The U.S. Constitution was ratified in 1788 and superseded the Articles of Confederation. The first ten Amendments, ratified in 1791 (rather quickly after the drafting and adoption of the main document — oops, the framers had forgotten to spell out these obvious assumptions about rights), are known as the Bill of Rights. The final amendment to date, the 27th Amendment, though proposed in 1789 along with others, was not ratified until 1992. A half dozen additional amendments approved by Congress have not yet been ratified, and a large number of other amendments have been proposed without ever winning Congressional approval.

The received wisdom is that, by virtue of its lengthy service as the supreme law of the land, the U.S. Constitution has become sacrosanct and invulnerable to significant criticism and further amendment. That wisdom has begun to be questioned actively as a result of (at least) two factors: (1) recognition that the Federal government serves the common good and citizenry rather poorly, having become corrupt and dysfunctional, and (2) the Electoral College, an anachronism from the Revolutionary Era that skews voting power away from cities and handed two recent presidential elections to candidates who lost the popular vote. For a numerical analysis of how electoral politics is gamed to subvert public opinion, resulting in more government seats held by Republicans than voting (expressing the will of the people) would indicate, see this article by the Brookings Institution.
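
A toy sketch of the mechanism may help (my own illustration in Python, with entirely made-up numbers, not figures drawn from the Brookings analysis): under winner-take-all allocation, narrow wins spread across many states can outweigh a lopsided loss in one populous state, so the Electoral College and the popular vote diverge.

    # Toy model with hypothetical numbers: each state awards all of its
    # electoral votes to its popular-vote winner (winner-take-all).
    # Tuples are (electoral_votes, votes_for_A, votes_for_B).
    states = [
        (55, 4_000_000, 8_000_000),  # A loses one populous state by a landslide...
        (10, 1_020_000, 980_000),    # ...but carries six mid-sized states narrowly
        (10, 1_020_000, 980_000),
        (10, 1_020_000, 980_000),
        (10, 1_020_000, 980_000),
        (10, 1_020_000, 980_000),
        (10, 1_020_000, 980_000),
    ]

    a_ec = sum(ev for ev, a, b in states if a > b)   # 60 electoral votes
    b_ec = sum(ev for ev, a, b in states if b > a)   # 55 electoral votes
    a_pop = sum(a for _, a, b in states)             # 10,120,000 popular votes
    b_pop = sum(b for _, a, b in states)             # 13,880,000 popular votes

    print(f"Electoral College: A {a_ec}, B {b_ec}")        # A wins the College...
    print(f"Popular vote:      A {a_pop:,}, B {b_pop:,}")  # ...while losing by ~3.8 million

The arithmetic is the whole trick: every vote beyond a state’s winning threshold counts for nothing, so margins piled up in a few places are effectively wasted.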

These are issues of political philosophy and ongoing public debate, spurred by dissatisfaction over periodic Federal shutdowns, power struggles between the executive and legislative branches that are tantamount to holding each other hostage, and income inequality that pools wealth and power in the hands of ever fewer people. The judicial branch (especially the U.S. Supreme Court) is also a significant point of contention; its newly appointed members are increasingly right-wing but have not (yet) taken openly activist roles (e.g., reversing Roe v. Wade). As philosophy, questioning the wisdom of the U.S. Constitution requires considerable knowledge of history and comparative government to undertake with equanimity (as opposed to emotionalism). I don’t possess such expert knowledge but will observe that the U.S. is an outlier among nations in relying on a centuries-old constitution, which may not have been the expectation or intent of the drafters.

It might be too strong to suggest just yet that the public feels betrayed by its institutions. Better to say that, for instance, the U.S. Constitution is now regarded as a flawed document — not for its day (with limited Federal powers) but for the needs of today (where the Federal apparatus, including the giant military, has grown into a leviathan). This would explain renewed interest in direct democracy (as opposed to representative government), flirtations with socialism (expanded over the blended system we already have), and open calls for revolution to remove a de facto corporatocracy. Whether the U.S. Constitution can or should survive these challenges is the question.

As a student, practitioner, and patron of the fine arts, I long ago imbibed the sybaritic exhortation that beauty and meaning drawn from sensory stimulation are a significant source of enjoyment, a high calling even. Accordingly, learning to decode and appreciate the conventions of various forms of expression required effort, which was repaid and deepened over a lifetime of experience. I recognize that, because of their former close association with the European aristocracy and American moneyed class, the fine arts (Western genres) have never quite distanced themselves from charges of elitism. However, I’ve always rejected that perspective. Since the latter part of the 20th century, the fine arts have never been more available to people of all walks of life, as crowds at art galleries attest.

Beyond the fine arts, I also recognize that people have a choice of aesthetics. Maybe it’s the pageantry of sports (including the primal ferocity of combat sports); the gastronomic delight of a fine meal, liquor, or cigar; identification with a famous brand; the pampered lifestyles of the rich and famous, with their premium services, personal staffs, and entourages; the sound of a Harley-Davidson motorcycle or a 1970s American muscle car; the sartorial appointments of high fashion and couture; simple biophilia; the capabilities of a smartphone or other tech device; or the brutal rhetoric and racehorse politics of the campaign trail. Take your pick. In no way do I consider one choice of aesthetic equivalent to another. Differences of quality and intent are so obvious that any relativist claim asserting false equivalence ought to be dismissed out of hand. However, there is considerable leeway. One of my teachers summed up taste variance handily: “that’s why they make chocolate and vanilla.”

Beauty and meaning are not interchangeable, but they are often sloppily conflated. The meaning found in earnest striving and sacrifice is a quintessential substitute for beauty. Thus, we’re routinely instructed to honor our troops for their service. Patriotic holidays (Independence Day, Memorial Day, Veterans Day, and others) form a thematic group. Considering how the media reflexively valorizes (rarely deploring) acts of force and mayhem authorized and carried out by the state, and how the citizenry takes that instruction and repeats it, it’s fair to say that an aesthetic attaches to such activity. For instance, some remember (with varying degrees of disgust) news anchor Brian Williams waxing rhapsodic over the Syrian conflict. Perhaps Chris Hedges’ book War is a Force That Gives Us Meaning provides greater context. I haven’t read the book, but the title is awfully provocative, which some read as an encomium to war. Book jacket blurbs and reviews indicate more circumspect arguments drawn from Hedges’ experience as a war correspondent.

We’re currently in the so-called season of giving. No one can escape any longer the marketing harangues about Black Friday, Small Business Saturday, and Cyber Monday that launch the season. None of those days has much integrity, not that they ever did, since they bleed into each other as retailers strain to get a jump on one or extend another. We’re a thoroughly consumer society, which is itself an aesthetic (maybe I should have written anesthetic). Purchasing decisions are made according to a choice of aesthetics: brand, features, looks, price, etc. An elaborate machinery of psychological prods and inducements has been developed over the decades to influence consumer behavior. (A subgenre of psychology also studies these influences and behaviors.) The same can be said of the shaping of consumer-citizen opinion. While some resist being channeled into others’ prescribed thought worlds, the difficulty of maintaining truly original, independent thought in the face of a deluge of both reasonable and bad-faith influence makes succumbing nearly inevitable. Under such conditions, one wonders whether a choice of aesthetic even really exists.

From time to time, I admit that I’m in no position to referee disputes, usually because I lack technical expertise in the hard sciences. I also avoid the impossible task of policing the Internet by assiduously pointing out error wherever it occurs. Others concern themselves with correcting the record and/or reinterpreting arguments with improved context and accuracy. However, once in a while, something crosses my desk that gets under my skin. An article by James Ostrowski entitled “What America Has Done To its Young People is Appalling,” published at LewRockwell.com, is such a case. It’s undoubtedly a coincidence that the most famous Rockwell is arguably Norman Rockwell, whose celebrated illustrations, for the Saturday Evening Post in particular, helped reinforce a charming midcentury American mythology. Lew Rockwell, OTOH, is described briefly in the website’s About blurb:

The daily news and opinion site LewRockwell.com was founded in 1999 by anarcho-capitalists Lew Rockwell … and Burt Blumert to help carry on the anti-war, anti-state, pro-market work of Murray N. Rothbard.

Those political buzzwords probably deserve some unpacking. However, that project falls outside my scope. In short, they handily foist blame for what ails us in American culture on government planning, as distinguished from the comparative freedom of libertarianism. Government earns its share of blame, no doubt, especially with its enthusiastic prosecution of war (now a forever war); but as snapshots of competing political philosophies, these buzzwords are reductive almost to the point of meaninglessness. Ostrowski lays blame more specifically on feminism and progressive big government and harks back to an idyllic 1950s nuclear family fully consonant with Norman Rockwell’s illustrations, thus invoking the nostalgic frame.

… the idyllic norm of the 1950’s, where the mother typically stayed home to take care of the kids until they reached school age and perhaps even long afterwards, has been destroyed.  These days, in the typical American family, both parents work fulltime which means that a very large percentage of children are consigned to daycare … in the critical first five years of life, the vast majority of Americans are deprived of the obvious benefits of growing up in an intact family with the mother at home in the pre-school years. We baby boomers took this for granted. That world is gone with the wind. Why? Two main reasons: feminism and progressive big government. Feminism encouraged women to get out of the home and out from under the alleged control of husbands who allegedly controlled the family finances.

Problem is, 1950s social configurations in the U.S. were the product of a convergence of historical forces, not least of which were the end of WWII and newfound American geopolitical and economic prominence. More pointedly, an entire generation of young men and women who had deferred family life during perilous wartime were then able to marry, start families, and provide for them on a single income — typically that of the husband/father. That was the baby boom. Yet to enjoy the benefits of the era fully, one probably needed to be a WASPy middle-class male or the child of one. Women and people of color fared … differently. After all, the 1950s yielded to the sexual revolution and civil rights era one decade later, both of which aimed specifically to improve the lived experience of, well, women and people of color.

Since the 1950s were only roughly 60 years ago, it might be instructive to consider how life was another 60 years before then, in the 1890s. If one lived in an eastern American city, life was often a Dickensian dystopia, complete with child labor, poorhouses, orphanages, asylums, and unhygienic conditions. If one lived in an agrarian setting, which was far more prevalent before the great 20th-century migration to cities, then life was frequently dirt-poor subsistence and/or pioneer homesteading requiring dawn-to-dusk labor. Neither mode yet enjoyed social planning and progressive support including, for example, sewers and other modern infrastructure, public education, and economic protections such as unionism and trust-busting. Thus, 19th-century America might be characterized fairly as being closer to anarcho-capitalism than at any time since. One of its principal legacies, one must be reminded, was pretty brutal exploitation of (and violence against) labor, the severity of which can be gauged by the emergence of political parties that sought to redress its worst scourges. Hindsight informs us now that reforms were slow, partial, and impermanent, leading to the observation that among all tried forms of self-governance, democratic capitalism can be characterized as perhaps the least awful.

So yeah, the U.S. came a long way from 1890 to 1950, especially in terms of standard of living, but may well be backsliding as the 21st-century middle class is hollowed out (a typical income — now termed household income — being rather challenging for a family), aspirations to rise economically above one’s parents’ level no longer function, and the culture disintegrates into tribal resentments and unrealistic fantasies about nearly everything. Ostrowski marshals a variety of demographic facts and figures to support his argument (with which I agree in large measure), but he fails to make a satisfactory causal connection with feminism and progressivism. Instead, he sounds like 45 selling his slogan Make America Great Again (MAGA), meaning let’s turn back the clock to those nostalgic 1950s happy days. Interpretations of that sentiment run in all directions from innocent to virulent (but coded). With blame placed on feminism and progressivism, it’s not difficult to hear anyone citing those putative causes as making an accusation: if only those feminists and progressives (and others) had stayed in their assigned lanes, we wouldn’t be dealing now with cultural crises that threaten to undo us. What Ostrowski fails to acknowledge is that despite all sorts of government activity over the decades, no one in the U.S. is steering the culture nearly as actively as in centrally planned economies and cultures, current and historical, which in their worst instances are fascist and/or totalitarian. One point I’ll agree on, however, just to be charitable, is that the mess we’ve made and will leave to youngsters is truly appalling.

Caveat: Rather uncharacteristically long for me. Kudos if you have the patience for all of this.

Caught the first season of HBO’s series Westworld on DVD. I have a boyhood memory of the original film (1973) with Yul Brynner and a dim memory of its sequel Futureworld (1976). The sheer charisma of Yul Brynner in the role of the gunslinger casts a long shadow over the new production, not that most of today’s audiences have seen the original. No doubt, 45 years of technological development in film production lends the new version some distinct advantages. Visual effects are quite stunning, and Utah landscapes have never been photographed more appealingly. Moreover, storytelling styles have changed, though it’s difficult to argue convincingly that they’re necessarily better now than then. Competing styles only appear dated. For instance, the new series has immensely more time to develop its themes; but the ancient parables of hubris and loss of control over our own creations run amok (e.g., Shelley’s Frankenstein, or more contemporaneously, the surprisingly good new movie Upgrade) have compact, appealing narrative arcs quite different from constant teasing and foreshadowing of plot developments while actual plotting proceeds glacially. Viewers wait an awful lot longer in the HBO series for resolution of tensions and emotional payoffs, by which time investment in the story lines has dissipated. There is also no terrifying crescendo of violence and chaos demanding rescue or resolution. HBO’s Westworld often simply plods on. To wit, a not insignificant portion of the story (um, side story) is devoted to boardroom politics (yawn) regarding who actually controls the Westworld theme park. Plot twists and reveals, while mildly interesting (typically guessed in advance by today’s cynical audiences), do not tie the narrative together successfully.

Still, Westworld provokes considerable interest from me due to my fascination with human consciousness. The initial episode builds out the fictional future world with characters speaking exposition clearly owing its inspiration to Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (another reference audiences are quite unlikely to know or recognize). I’ve had the Julian Jaynes Society’s website bookmarked for years and read the book some while back; never imagined it would be captured in modern fiction. Jaynes’ thesis (if I may be so bold as to summarize radically) is that modern consciousness coalesced around the collapse of multiple voices in the head — ideas, impulses, choices, decisions — into a single stream of consciousness perhaps better understood (probably not) as the narrative self. (Aside: the multiple voices of antiquity correspond to polytheism, whereas the modern singular voice corresponds to monotheism.) Thus, modern human consciousness arose over several millennia as the bicameral mind (the divided brain having two camerae, or chambers) functionally collapsed. The underlying story of the new Westworld is the emergence of machine consciousness, a/k/a strong AI, a/k/a The Singularity, while the old Westworld was about a mere software glitch. Exploration of machine consciousness modeling (e.g., improvisation builds on memory to create awareness) as a proxy for better understanding human consciousness might not be the purpose of the show, but it’s clearly implied. And although conjectural, the slow emergence of human consciousness contrasts sharply with the abrupt ON switch theorized for machine consciousness. Westworld treats them as roughly equivalent, though in fairness, 35 years or so in Westworld is in fact abrupt compared to several millennia. (Indeed, the story asserts that machine consciousness sparked alive repeatedly over those 35 years, as I suggested here, but was dialed back each time. Never mind all the unexplored implications.) Additionally, the fashion in which Westworld uses the term bicameral ranges from sloppy to meaningless, like the infamous technobabble of Star Trek.


I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases born of methodology and, to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple of decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Plenty of prohibitions throughout history against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both the weaponry and the power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without naming it explicitly, Harari is invoking the precautionary principle. He also observed that some of those effects can be displaced by hundreds or even thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans and reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.

YouTube ratings magnet Jordan Peterson had a sit-down with Susan Blackmore to discuss/debate the question, “Do We Need God to Make Sense of Life?” The conversation is lightly moderated by Justin Brierley and is part of a weekly radio broadcast called Unbelievable? (a/k/a The Big Conversation, “the flagship apologetics and theology discussion show on Premier Christian Radio in the UK”). One might wonder why evangelicals are so eager to pit believers and atheists against each other. I suppose earnest questioning of one’s faith is preferable to proselytizing, though both undoubtedly occur. The full episode (47 min.) is embedded below:

Twice in the last month I stumbled across David Benatar, an anti-natalist philosopher, first in a podcast with Sam Harris and again in a profile of him in The New Yorker. Benatar is certainly an interesting fellow, and I suspect earnest in his beliefs and academic work, but I couldn’t avoid shrugging as he gets caught in the sort of logical traps that plague hyperintellectual folks. (Sam Harris is prone to the same problem.) The anti-natalist philosophy in a nutshell is finding, after tallying the pros and cons of living (sometimes understood as happiness or enjoyment versus suffering), that on balance, it would probably be better never to have lived. Benatar doesn’t apply the finding retroactively by suggesting folks end their lives sooner rather than later, but he does recommend that new life should not be brought into the world — an interdiction almost no parent would consider for more than a moment.

The idea that we are born against our will, never asked whether we wanted life in the first place, is an obvious conundrum but treated as a legitimate line of inquiry in Benatar’s philosophy. The kid who throws the taunt “I never asked to be born!” at a parent in the midst of an argument might score an emotional hit, but there is no logic to the assertion. Language is full of logic traps like this, such as “an infinity of infinities” (or multiverse), “what came before the beginning?” or “what happens after the end?” Most know to disregard such questions, but entire religions are based on seeking the path to the (good) afterlife as if conjuring such a proposition manifests it in reality.

This Savage Love column got my attention. As with Dear Abby, Ask Marilyn, or indeed any advice column, I surmise that questions are edited for publication. Still, a couple of minor usage errors attracted my eye, which I can let go without further chastising comment. More importantly, question and answer both employ a type of Newspeak commonplace among those attuned to identity politics. Those of us not struggling with identity issues may be less conversant with this specialized language, or it could be a generational thing. Coded speech is not unusual within specialized fields of endeavor. My fascination with nomenclature and neologisms makes me pay attention, though I’m not typically an adopter of hip new coin.

The Q part of Q&A never actually asks a question but provides context to suggest or extrapolate one, namely, “please advise me on my neuro-atypicality.” (I made up that word.) While the Q acknowledges that folks on the autism spectrum are not neurotypical, the word disability is put in quotes (variously, scare quotes, air quotes, or irony quotes), meaning that it is not or should not be considered a real or true disability. Yet the woman acknowledges her own difficulty with social signaling. The A part of Q&A notes a marked sensitivity to social justice among those on the spectrum, acknowledges a correlation with nonstandard gender identity (or is it sexual orientation?), and includes a jibe that standard advice is to mimic neurotypical behaviors, which “tend to be tediously heteronormative and drearily vanilla-centric.” The terms tediously, drearily, and vanilla push unsubtly toward normalization and acceptance of kink and aberrance, as does Savage Love in general. I wrote about this general phenomenon in a post called “Trans is the New Chic.”

Whereas I have no hesitation expressing disapproval of shitty people, shitty things, and shitty ideas, I am happy to accept many mere differences with indifference, not caring two shits either way. This question asks about a fundamental human behavior: sexual expression. Everyone needs an outlet, and outliers (atypicals, nonnormatives, kinksters, transgressors, etc.) undoubtedly have more trouble than normal folks. Unless living under a rock, you’ve no doubt heard and/or read theories from various quarters that character distortion often stems from sexual repression or lack of sexual access, which describes a large number of societies, historical and contemporary. Some would include the 21st-century U.S. in that category, but I disagree. Sure, we have puritanical roots, recent moral panic over sexual buffoonery and crimes, and a less healthy sexual outlook than, say, European cultures, but we’re also suffused in licentiousness, Internet pornography, and everyday seductions served up in the media via advertising, R-rated cinema, and TV-MA content. It’s a decidedly mixed bag.

Armed with a lay appreciation of sociology, I can’t help but observe that humans are a social species with hierarchies and norms, not as rigid or prescribed perhaps as with insect species, but nonetheless possessing powerful drives toward consensus, cooperation, and categorization. Throwing open the floodgates to wide acceptance of aberrant, niche behaviors strikes me as swimming decidedly upstream in a society populated by a sizable minority of conservatives mightily offended by anything falling outside the heteronormative mainstream. I’m not advocating either way but merely observing the central conflict.

All this said, the thing that has me wondering is whether autism isn’t itself an adaptation to information overload commencing roughly with the rise of mass media in the early 20th century. If one expects that the human mind is primarily an information processor and the only direction is to process ever more information faster and more accurately than in the past, well, I have some bad news: we’re getting worse at it, not better. So while autism might appear to be maladaptive, filtering out useless excess information might counterintuitively prove to be adaptive, especially considering the disposition toward analytical, instrumental thinking exhibited by those on the spectrum. How much this style of mind is valued in today’s world is an open question. I also don’t have an answer to the nature/nurture aspect of the issue, which is whether the adaptation/maladaptation is more cultural or biological. I can only observe that it’s on the rise, or at least being recognized and diagnosed more frequently.

I watched a documentary on Netflix called Jim & Andy (2017) that provides a glimpse behind the scenes of the making of Man on the Moon (1999), in which Jim Carrey portrays Andy Kaufman. It’s a familiar story of art imitating life (or is it life imitating art?) as Carrey goes method and essentially channels Kaufman and Kaufman’s alter ego Tony Clifton. A whole gaggle of actors played earlier incarnations of themselves in Man on the Moon and appeared as themselves (without artifice) in Jim & Andy, adding another weird dimension to the goings-on. Actors losing themselves in roles and undermining their sense of self is hardly novel. Regular people lose themselves in their jobs, hobbies, media hype, glare of celebrity, etc. all the time. From an only slightly broader perspective, we’re all merely actors playing roles, shifting subtly or dramatically based on context. Shakespeare observed it centuries ago. However, the documentary points to a deeper sense of unreality precisely because Kaufman’s principal shtick was to push discomfiting jokes/performances beyond the breaking point, never dropping the act to let his audience in on the joke or provide closure. It’s a manifestation of what I call the Disorientation Protocol.
