Posts Tagged ‘Culture’

There is something ironic and vaguely tragic about how various Internet platforms — mostly search engines and social media networks — have unwittingly been thrust into roles their creators never envisioned for themselves. Unless I’m mistaken, they launched under the same business model as broadcast media: create content, or better yet, crowd-source content, to draw in viewers and subscribers whose attention is then delivered to advertisers. Revenue is derived from advertisers while the basic services — i.e., search, job networking, encyclopedias and dictionaries, or social connection — are given away gratis. The modest inconveniences and irritations of having the screen littered and interrupted with ads are a trade-off most end users are happy to accept for free content.

Along the way, some platform operators discovered that user data itself could be both aggregated and individualized and subsequently monetized. This second step unwittingly created the so-called surveillance capitalism that Shoshana Zuboff writes about in her recently published book (previously blogged about here). Essentially, an Orwellian Big Brother (several of them, in fact) tracks one’s activity through smartphone apps and Web browsers, including GPS data revealing movement through real space, not just virtual spaces. This is also the domain of the national security state, from local law enforcement to the various security branches of the Federal government: dragnet surveillance where everyone is watched continuously. Again, end users shrug off surveillance as either no big deal or too late to resist.

The most recent step is that, like the Internet itself, various platforms have been functioning for some time already as public utilities and have accordingly fallen under demands for regulation with regard to authenticity, truth, and community standards of allowable speech. Thus, private corporations have been thrust unexpectedly into the role of regulating content. The problem is that, unlike broadcast networks, which create their own content and can easily enforce restrictive standards, crowd-sourced platforms enable the general population to upload its own content, often mere commentary in text form but increasingly video, without any editorial review. These platforms have parried by deploying and/or modifying their preexisting surveillance algorithms to search for objectionable content normally protected as free speech, then removing content, demonetizing channels, and banning offending users indefinitely, typically without warning and without appeal.

If Internet entrepreneurs initially got into the biz to make a few (or a lot of) quick billions, which some few of them have, they have by virtue of the global reach of their platforms been transformed into censors. It’s also curious that by enabling end users to publish to their platforms, they’ve given voice to the masses in all their unwashed glory. Now, everyone’s crazy, radicalized uncle (or sibling or parent or BFF), formerly banished to obscurity railing against one thing or another at the local tavern, where he was tolerated as harmless so long as he kept his bar tab current, is proud to fly his freak flag anywhere and everywhere. Further, the anonymous coward who might once have issued death or bomb threats to denounce others has been given the means to distribute hate across platforms and into the public sphere, where it gets picked up and maybe censored. Worst of all, the folks who monitor and decide what is allowed, functioning as modern-day thought police, are private citizens and corporations with no oversight or legal basis to act except for the fact that everything occurs on their respective platforms. This is a new aspect of the corporatocracy, but not one anyone planned.


Throughout human history, the question “who should rule?” has been answered myriad ways. The most enduring answer is simple: he who can muster and deploy the most force of arms and then maintain control over those forces. Genghis Khan is probably the most outrageously successful example and is regarded by the West as a barbarian. Only slightly removed from barbarians is the so-called Big Man, who perhaps adds a layer of diplomacy by running a protection racket while selectively providing and distributing spoils. As societies move further away from subsistence and immediacy, various absolute rulers are established, often through hereditary title. Call them Caesar, chief, dear leader, emir, emperor (or empress), kaiser, king (or queen), pharaoh, premier, el presidente, sultan, suzerain, or tsar; they typically acquire power through the accident of birth and are dynastic. Some are female but most are male, and they typically extract tribute and sometimes demand loyalty oaths.

Post-Enlightenment, rulers are frequently democratically elected administrators (e.g., legislators, technocrats, autocrats, plutocrats, kleptocrats, and former military) ideally meant to be representative of common folks. In the U.S., members of Congress (and of course the President) are almost wholly drawn from the ranks of the wealthy (insufficient wealth being a de facto bar to office) and are accordingly estranged from American life in the many different ways most of us experience it. Below the top level of visible, elected leaders is a large, hidden apparatus of high-level bureaucratic functionaries (often appointees), the so-called Deep State, that is relatively stable and made up primarily of well-educated, white-collar careerists whose ambitions for themselves and the country are often at odds with those of the citizenry.

I began to think about this in response to a rather irrational reply to an observation I made here. Actually, it wasn’t even originally my observation but that of Thomas Frank, namely, that the Deep State is largely made up of the liberal professional class. The reply reinforced the notion that no one is better suited to rule than the “pros.” History makes the alternatives unthinkable. Thus, the Deep State’s response to the veritable one-man barbarian invasion of the Oval Office has been to seek removal of the interloper by hook or by crook. (High office in this case was won unexpectedly and without precedent by rhetorical force — base populism — rather than by military coup, making the current occupant a quasi-cult leader; similarly, the extracted tribute is merely gawking attention rather than riches.)

History also reveals that all forms of political organization suffer endemic incompetence and corruption, lending truth to Winston Churchill’s witticism “Democracy is the worst form of government, except for all the others.” Indeed, recent rule by technocrats has been supremely awful, leading to periodic market crashes, extreme wealth inequality, social stigmatization, and forever wars. Life under such rule is arguably better than under various other political styles; after all, we gots our vaunted freedoms and enviable material comforts. But the exercise of those freedoms does not reliably deliver either the ontological security or the psychological certainty we humans crave. In truth, our current form of self-governance has let nothing get in the way of plundering the planet for short-term profit. That ongoing priority is making Earth uninhabitable not just for other species but for humans, too. In light of this fact, liberal technocratic democracy could be a far worse failure than most: it will have killed billions (an inevitability now under delayed effect).

Two new grassroots movements (to my knowledge) have appeared that openly question who should rule: the Sunrise Movement (SM) and the Extinction Rebellion (ER). SM is a youth political movement in the U.S. that acknowledges climate change and supports the Green New Deal as a way of prioritizing the desperate existential threat modern politics and society have become. For now at least, SM appears to be content with working within the system, replacing incumbents with candidates it supports. More intensely, ER is a global movement centered in the U.K. that also acknowledges that familiar modern forms of social and political organization (there are several) no longer function but in fact threaten all of us with, well, extinction. One of its unique demands is that legislatures be drawn via sortition from the general population to be more representative of the people. Further, sortition avoids the established pattern whereby those elected to lead representative governments are corrupted by the very process of seeking and attaining office.
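Sortition is mechanically trivial, which is part of its appeal. Here is a minimal sketch in Python, using hypothetical numbers of my own invention (nothing ER has actually specified), showing that selection by lot removes campaigning, fundraising, and incumbency from the process entirely:

    import random

    def draw_assembly(roll_size, seats, seed=None):
        """Select a citizens' assembly by lot: every eligible citizen
        has an equal chance of serving."""
        rng = random.Random(seed)  # a published seed makes the draw publicly verifiable
        return sorted(rng.sample(range(roll_size), seats))

    # Hypothetical numbers: 435 seats drawn from ~250 million eligible citizens.
    assembly = draw_assembly(roll_size=250_000_000, seats=435, seed=2019)
    print(len(assembly), assembly[:5])

Real-world citizens’ assemblies typically add stratification (by age, region, gender, and so forth) so the lottery is representative along chosen dimensions rather than purely uniform, but the principle is the same.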

I surmise attrition and/or replacement (the SM path) are too slow and leave candidates vulnerable to corruption. In addition, since no one relinquishes power willingly, current leaders will have to be forced out via open rebellion (the ER path). I’m willing to entertain either path but must sadly conclude that both are too little, too late to address climate change and near-term extinction effectively. Though difficult to establish convincingly, I suspect the time to act was in the 1970s (or even before) when the Ecology Movement arose in recognition that we cannot continue to despoil our own habitat without consequence. That message (social, political, economic, and scientific all at once) was as inert then as it is now. However, fatalism acknowledged, some other path forward is better than our current systems of rule.

Everyone is familiar with the convention in entertainment media where characters speak without the use of recognizable language. (Not really related to the convention of talking animals.) The first instance I can recall (someone correct me if earlier examples are to be found) is the happy-go-lucky bird Woodstock from the old Peanuts cartoons (do kids still recognize that cast of characters?), whose dialog was shown graphically as a series of vertical lines.

When the cartoon made its way onto TV for holiday specials, its creator Charles Schulz used the same convention to depict adults, never shown onscreen but with dialogue voiced by a Harmon-muted trombone. Roughly a decade later, two characters from the Star Wars franchise “spoke” in languages only other Star Wars characters could understand, namely, Chewbacca (Chewie) and R2-D2. More recently, the character Groot from Guardians of the Galaxy (known to me only through the Marvel movie franchise, not through comic books) speaks only one line of dialogue, “I am Groot,” which is understood as full speech by other Guardians characters. When behemoths larger than a school bus (King Kong, Godzilla, Jurassic dinosaurs, Cloverfield, kaiju, etc.) appear, the characters are typically denied the power of speech beyond the equivalent of a lion’s roar. (True villains talk little or not at all as they go about their machinations — no monologuing! unless it’s a James Bond film. An exception notable for its failure to charm audiences is Ultron, who wouldn’t STFU. You can decide for yourself which is the worse kind of villainy.)

This convention works well enough for storytelling and has the advantage of allowing the reader/viewer to project onto otherwise blank speech. However, when imported into the real world, especially in politics, the convention founders. There is no Babel fish universal translator inserted in the ear to transform nonsense into coherence. The obvious example of babblespeech is 45, whose speech, when off the teleprompter, is a series of rambling non sequiturs, free associations, slogans, and sales pitches. Transcripts of anyone’s extemporaneous speech reveal lots of restarts and blind alleys; we all interrupt ourselves to redirect. However, the word salad that substitutes for meaningful content in 45’s case is tragicomic: alternately entirely frustrating or comically entertaining depending on one’s objective. Satirical news shows fall into the second category.

45 is certainly not the first. Sarah Palin in her time as a media darling (driver of ratings and butt of jokes — sound familiar?) had a knack for crazy speech combinations that were utter horseshit yet oddly effective for some credulous voters. She was even a hero to some (nearly a heartbeat away from being the very first PILF). We’ve also now been treated to a series of public interrogations where a candidate for a cabinet post or an accused criminal offers testimony before a congressional panel. Secretary of Education Betsy DeVos famously evaded simple yes/no questions during her confirmation hearing, and Supreme Court Justice Brett Kavanaugh similarly refused to provide direct answers to direct questions. Unexpectedly, sacrificial lamb Michael Cohen did give direct answers to many questions, but his interlocutors then didn’t quite know how to respond, given their experience and expectation that no one answers directly.

What all this demonstrates is that there is often a wide gulf between what is said and what is heard. In the absence of what might be understood as effective communication (honest, truthful, and forthright), audiences and voters fill in the blanks. Ironically, we also can’t handle hearing too much truth when confronted by its awfulness. None of this is a problem in storytelling, but when found in political narratives, it’s emblematic of how dysfunctional our communications have become, and with them, the clear thought and principled activity of governance.

First, a bit of history. The U.S. Constitution was ratified in 1788 and superseded the Articles of Confederation. The first ten Amendments, ratified in 1791 (rather quickly after the initial drafting and adoption of the main document — oops, forgot these obvious assumptions), are known as the Bill of Rights. The final amendment to date, the 27th Amendment, though proposed in 1789 along with others, was not ratified until 1992. A half dozen additional amendments approved by Congress have not yet been ratified, and a large number of other unapproved amendments have been proposed.

The received wisdom is that, by virtue of its lengthy service as the supreme law of the land, the U.S. Constitution has become sacrosanct and invulnerable to significant criticism and further amendment. That wisdom has begun to be questioned actively as a result of (at least) two factors: (1) recognition that the Federal government serves the common good and citizenry rather poorly, having become corrupt and dysfunctional, and (2) the Electoral College, an anachronism from the Revolutionary Era that skews voting power away from cities, which handed two recent presidential elections to candidates who failed to win the popular vote yet won in the Electoral College. For a numerical analysis of how electoral politics is gamed to subvert public opinion, resulting in more government seats held by Republicans than voting (expressing the will of the people) would indicate, see this article by the Brookings Institution.
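The arithmetic behind that skew is easy to demonstrate. Below is a toy model in Python with entirely invented numbers (not data from any actual election), showing how winner-take-all allocation lets narrow wins in many states outweigh enormous margins in a few:

    # Toy model of winner-take-all skew; all numbers are invented.
    # Candidate A wins 14 states narrowly; candidate B wins 12 states in blowouts.
    states  = [(10, 510_000, 490_000)] * 14   # (electoral votes, votes for A, votes for B)
    states += [(10, 200_000, 800_000)] * 12

    ev_a  = sum(ev for ev, a, b in states if a > b)   # 140
    ev_b  = sum(ev for ev, a, b in states if b > a)   # 120
    pop_a = sum(a for _, a, _ in states)              # 9,540,000
    pop_b = sum(b for _, _, b in states)              # 16,460,000

    print(f"popular vote:   A {pop_a:,} vs. B {pop_b:,}")  # B leads by ~6.9 million
    print(f"electoral vote: A {ev_a} vs. B {ev_b}")        # yet A wins the election

The real Electoral College compounds this effect with the two-Senator bonus every state receives regardless of population, which is what tilts power toward small states and away from cities.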

These are issues of political philosophy and ongoing public debate, spurred by dissatisfaction over periodic Federal shutdowns, power struggles between the executive and legislative branches that are tantamount to holding each other hostage, and income inequality that pools wealth and power in the hands of ever fewer people. The judicial branch (especially the U.S. Supreme Court) is also a significant point of contention; its newly appointed members are increasingly right wing but have not (yet) taken openly activist roles (e.g., reversing Roe v. Wade). As philosophy, questioning the wisdom of the U.S. Constitution requires considerable knowledge of history and comparative government to undertake with equanimity (as opposed to emotionalism). I don’t possess such expert knowledge but will observe that the U.S. is an outlier among nations in relying on a centuries-old constitution, which may not have been the expectation or intent of the drafters.

It might be too strong to suggest just yet that the public feels betrayed by its institutions. Better to say that, for instance, the U.S. Constitution is now regarded as a flawed document — not for its day (with limited Federal powers) but for the needs of today (where the Federal apparatus, including the giant military, has grown into a leviathan). This would explain renewed interest in direct democracy (as opposed to representative government), flirtations with socialism (expanded over the blended system we already have), and open calls for revolution to remove a de facto corporatocracy. Whether the U.S. Constitution can or should survive these challenges is the question.

Some while back, Scott Adams (my general disdain for him noted but unexpanded, since I’m not in the habit of shitting on people), using his knowledge of hypnosis, began selling the narrative that our Commander-in-Chief is cannily adept at the art of persuasion. I, for one, am persuaded by neither Adams nor 45 but must admit that many others are. Constant shilling for control of narratives by agents of all sorts could not be more transparent (for me at least), rendering the whole enterprise null. Similarly, when I see an advertisement (infrequently, I might add, since I use ad blockers and don’t watch broadcast TV or news programs), I’m rarely inclined to seek more information or make a purchase. Once in a long while, an ad creeps through my defenses and hits one of my interests, and even then, I rarely respond because, duh, it’s an ad.

In the embedded video below, Stuart Ewen describes how some learned to exploit a feature (not a bug) in human cognition, namely, appeals to emotion that overwhelm rational response. The most obvious, well-worn example is striking fear into people’s hearts and minds to convince them of an illusion of safety necessitating relinquishing civil liberties and/or fighting foreign wars.

The way Ewen uses the term consciousness differs from the way I use it. He refers specifically to opinion- and decision-making (the very things vulnerable to manipulation) rather than the more generalized and puzzling property of having an individual identity or mind and with it self-awareness. In fact, Ewen uses the terms consciousness industry and persuasion industry instead of public relations and marketing to name those who spin information and thus public discourse. At some level, absolutely everyone is guilty of seeking to persuade others, which again is a basic feature of communication. (Anyone negotiating the purchase of, say, a new or used car faces the persuasion of the sales agent with some skepticism.) What turns it into something maniacal is using lies and fabrication to advance agendas against the public interest, especially where public opinion is already clear.

Ewen also points to early 20th-century American history, where political leaders and marketers were successful in manipulating mass psychology in at least three ways: (1) drawing the pacifist U.S. public into two world wars of European origin, (2) transforming citizens into consumers, thereby saving capitalism from its inherently self-destructive endgame (creeping up on us yet again), and (3) suppressing emergent collectivism, namely, socialism. Of course, unionism as a collectivist institution still gained considerable strength but only within the larger context of capitalism, e.g., achieving the American Dream in purely financial terms.

So getting back to Scott Adams’ argument, the notion that the American public is under some form of mass hypnosis (persuasion) and that 45 is the master puppeteer is perhaps half true. Societies do sometimes go mad and fall under the spell of a mania or cult leader. But 45 is not the driver of the current episode, merely the embodiment. I wouldn’t say that 45 figured out anything, because that awards too much credit to presumed understanding and planning. Rather, he worked out (accidentally and intuitively — really by default considering his job in 2016) that his peculiar self-as-brand could be applied to politics by treating it all as reality TV, which by now everyone knows is its own weird unreality the same way professional wrestling is fundamentally unreal. (The term political theater applies here.) He demonstrated a knack (at best) for keeping the focus firmly on himself and driving ratings (abetted by the mainstream media that had long regarded him as a clown or joke), but those objectives were never really in service of a larger political vision. In effect, the circus brought to town offers its own bizarre constructed narrative, but its principal characteristic is gawking, slack-jawed, made-you-look narcissism, not any sort of proper guidance or governance.

Have to admit, when I first saw this brief article about middle school kids being enrolled in mandatory firearms safety classes, my gut response was something sarcastic to the effect “No, this won’t end badly at all ….” Second thought (upon reading the headline alone) was that it had to be Texas. Now that I’ve calmed down some, both responses are no longer primary in my thinking.

I’ve written before about the perception and function of guns of differing types. I daresay that little clarity has been achieved on the issue, especially because a gun is no longer understood as a tool (with all the manifold purposes that might entail) but is instead always a weapon (both offensive and defensive). The general assumption is that anyone brandishing a weapon (as in open carry) is preparing to use it imminently (so shoot first!). A corollary is that anyone merely owning a gun is similarly prepared but only in the early, hypothetical, or contingent stages. These are not entirely fair assumptions but demonstrate how our perception of the tool has shifted toward emotionalism.

My father’s generation may be among the last, apart from those with specialized training (e.g., hunters and those who have served in the military, who together still account for quite a lot of people), to retain the sense of a gun being a tool. Chain e-mails sometimes point out that students (especially at rural and collar county schools) used to bring guns to school to stow in their lockers for after-school use with Gun Club. I’d say “imagine doing that now” except that Iowa is doing just that, though my guess is that the guns are stored more securely than in a student locker. Thus, exposure to gun safety/handling and target practice may remove some of the stigma assigned to the tool as well as teach students respect for the tool.

Personally, I’ve had limited exposure to guns and tend to default (unthinkingly, reflexively) to what I regard as a liberal/progressive left opinion that I don’t want to own a gun and that guns should be better regulated to stem gun violence. However, only a little circumspection is needed to puncture that one-size-fits-all bubble. And as with so many complicated issues of the day, it’s a little hard to know what to wish for or to presume that I have the wisdom to know better than others. Maybe Iowa has it right and this may not end so badly.

As I reread what I wrote 2.5 years ago in my first blog on this topic, I surmise that the only update needed to my initial assessment is a growing pile of events that demonstrate my thesis: our corrupted information environment is too taxing on human cognition, with the result that a small but growing segment of society gets radicalized (wound up like a spring) and relatively random individuals inevitably pop, typically in a self-annihilating gush of violence. News reports bear this out periodically, as one lone-wolf kook after another takes it upon himself (are there any examples of females doing this?) to shoot or blow up some target, typically chosen irrationally or randomly though for symbolic effect. More journalists and bloggers are taking note of this activity and evolving or resurrecting nomenclature to describe it.

The earliest example I’ve found offering nomenclature for this phenomenon is a blog with a single post from 2011 (oddly, no follow-up) describing so-called stochastic terrorism. Other terms include syntactic violence, semantic violence, and epistemic violence, but they all revolve around the same point. Whether on the sending or receiving end of communications, some individuals are particularly adept at or sensitive to dog whistles that over time activate and exacerbate tendencies toward radical ideology and violence. Wired has a brief article from a few days ago discussing stochastic terrorism as jargon, which is basically what I’m doing here. Admittedly, the last of these terms, epistemic violence (alternative: epistemological violence), ranges farther afield from the end effect I’m calling wind-up toys. For instance, this article discussing structural violence is much more academic in character than when I blogged on the same term (one of a handful of “greatest hits” for this blog that return search-engine hits with some regularity). Indeed, just about any of my themes and topics can be given a dry, academic treatment. That’s not my approach (I gather opinions differ on this account, but I insist that real academic work is fundamentally different from my armchair cultural criticism), but it’s entirely valid despite being a bit remote for most readers. One can easily get lost down the rabbit hole of analysis.
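The “stochastic” in stochastic terrorism is doing real statistical work: no individual act can be predicted, but the expected number of acts scales with the size of the audience exposed to the incitement. A toy simulation in Python (probabilities entirely made up for illustration) captures the distinction:

    import random

    def expected_incidents(audience, p_act, trials=1_000, seed=42):
        """Toy model: each exposed person independently 'pops' with tiny
        probability p_act. Who acts is unpredictable; how many is not."""
        rng = random.Random(seed)
        total = sum(sum(rng.random() < p_act for _ in range(audience))
                    for _ in range(trials))
        return total / trials

    # Made-up numbers: a 10x larger audience yields ~10x the expected
    # incidents, even though naming the next actor remains impossible.
    print(expected_incidents(audience=1_000, p_act=1e-3))    # averages ~1
    print(expected_incidents(audience=10_000, p_act=1e-3))   # averages ~10

This is just the law of large numbers in sinister dress, which is why the term reads as jargon yet names something real.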

If indeed it’s mere words and rhetoric that transform otherwise normal people into criminals and mass murderers, then I suppose I can understand the distorted logic of the far Left that equates words and rhetoric themselves with violence, followed by the demand that they be provided with warnings and safe spaces lest they be triggered by what they hear, read, or learn. As I understand it, the fear is not so much that vulnerable, credulous folks will be magically turned into automatons wound up and set loose in public to enact violent agendas but instead that virulent ideas and knowledge (including many awful truths of history) might cause discomfort and psychological collapse akin to what happens when targets of hate speech and death threats are reduced, say, to quivering agoraphobia. Desire for protection from harm is thus understandable. The problem with such logic, though, is that protections immediately run afoul of free speech, a hallowed but misunderstood American institution that preempts quite a few restrictions many would have placed on the public sphere. Protections also stall learning and truth-seeking straight out of the gate. And besides, preemption of preemption doesn’t work.

In memetics, the notion of a caustic idea taking hold of an unwilling person and having its wicked way with him or her is what’s called a mind virus or meme. The viral metaphor accounts for the infectious nature of ideas as they propagate through the culture. For instance, every once in a while, a charismatic cult emerges and inducts new members, a suicide cluster appears, or suburban housewives develop wildly disproportionate phobias about Muslims or immigrants (or worse, Muslim immigrants!) poised at their doorsteps with intentions of rape and murder. Inflaming these phobias, often done by pundits and politicians, is precisely the point of semantic violence. Everyone is targeted but only a few are affected to the extreme of acting out violently. Milder but still invalid responses include the usual bigotries: nationalism, racism, sexism, and all forms of tribalism, “othering,” or xenophobia that seek to insulate oneself safely among like folks.

Extending the viral metaphor, to protect oneself from infectious ideas requires exposure, not insulation. Think of it as a healthy immune system built up gradually, typically early in life, through slow, steady exposure to harm. The alternative is hiding oneself away from germs and disease, which has the ironic result of weakening the immune system. For instance, I learned recently that peanut allergies can be overcome by gradual exposure — a desensitization process — but are exacerbated by removal of peanuts from one’s environment and/or diet. This is what folks mean when they say the answer to hate speech is yet more (free) speech. The nasty stuff can’t be dealt with properly when it’s quarantined, hidden away, suppressed, or criminalized. Maybe there are exceptions. Science fiction entertains those dangers with some regularity, where minds are shunted aside to become hosts for invaders of some sort. That might be overstating the danger somewhat, but violent eruptions lend the fear some credence.

Several politicians on the U.S. national stage have emerged in the past few years as firebrands of new politics and ideas about leadership — some salutary, others less so. Perhaps the quintessential example is Bernie Sanders, who identified himself as a Socialist within the Democratic Party, a tacit acknowledgement that there are no electable third-party candidates for high office thus far. Even 45’s emergence as a de facto independent candidate within the Republican Party points to the same effect (and at roughly the same time). Ross Perot and Ralph Nader came closest in recent U.S. politics to establishing viable candidacies outside the two-party system, but their ultimate failures only reinforce the rigidity of modern party politics; it’s a closed system.

Those infusing energy and new (OK, in truth, they’re old) ideas into this closed system are intriguing. By virtue of his immediate name/brand recognition, Bernie Sanders can now go by his single given name (the same is true of Hillary, Donald, and others). Supporters of Bernie’s version of Democratic Socialism are thus known as Bernie Bros, though the term is meant pejoratively. Given his age, however, Bernie is not widely considered a viable presidential candidate in the next election cycle. Among other firebrands, I was surprised to find Alexandria Ocasio-Cortez (often referred to simply as AOC) described in the video embedded below as a Democratic Socialist but without any reference to Bernie (“single-handedly galvanized the American people”):

Despite the generation(s) gap, young adults had no trouble supporting Bernie three years ago but appear to have shifted their ardent support to AOC. Yet Bernie is still relevant and makes frequent statements demonstrating how well he understands the failings of the modern state, its support of the status quo, and the cult of personality behind certain high-profile politicians.

As I reflect on history, it occurs to me that many of the major advances in society (e.g., abolition, suffrage, the labor movement, civil rights, equal rights and abortion, and the end of U.S. involvement in the Vietnam War) occurred not because our government led us to them but because the American people forced the issues. The most recent examples of the government yielding to the will of the people are gay marriage and cannabis/hemp legalization (still underway). I would venture that Johnson and Nixon were the last U.S. presidents who experienced palpable fear of the public. (Claims that Democrats are afraid of AOC ring hollow — so far.) As time has worn on, later presidents have been confident in their ability to buffalo the public or at least to use the power of the state to quell unrest (e.g., the Occupy movement). (Modern means of crowd control raise serious questions about the legitimacy of any government that would use them against its own citizens. I would include enemy combatants, but that is a separate issue.) In contrast with salutary examples of the public using its disruptive power over a recalcitrant government are arguably more examples where things went rather badly haywire. Looking beyond the U.S., the French Reign of Terror and the Bolsheviks are the two examples that leap immediately to mind, but there are plenty of others. The pattern appears to be a populist ideology that takes root and turns virulent and violent, followed by consolidation of power by those who manage to survive the inevitable purge of dissidents.

I bring this up because we’re in a period of U.S. history characterized by populist ideological possession on both sides (left/right) of the political divide, though politics ought to be better understood as a spectrum. Extremism has again found a home (or several), and although the early stages appear to be mild or harmless, I fear that a charismatic leader might unwittingly succeed in raising a mob. As the saying goes (from the Indiana Jones movie franchise), “You are meddling with forces you cannot possibly comprehend,” to which I would add cannot control. Positioning oneself at the head of a movement or rallying behind such an opportunist may feel like the right thing to do but could easily and quickly veer into wildly unintended consequences. How many times in history has that already occurred?

For ambulatory creatures, vision is arguably the primary of the five (main) senses. Humans are among those species that stand upright, facilitating a portrait orientation when interacting among ourselves. The terrestrial environment on which we live, however, is in landscape (as distinguished from the more nearly 3D environments of birds and insects in flight or marine life in rivers, lakes, seas, and oceans). My suspicion is that modest visual conflict between portrait and landscape is among the dynamics that give rise to the orienting response, a step down from the startle reflex, which demands full attention when visual environments change.

I recall reading somewhere that wholesale changes in surroundings, such as when crossing a threshold, passing through a doorway, entering or exiting a tunnel, and notably, entering and exiting an elevator, trigger the orienting response. Indeed, the flush of disorientation before one gets his or her bearings is tantamount to a mind wipe, at least momentarily. This response may also help to explain why small, bounded spaces such as the interiors of vehicles (large and small) in motion feel like safe, contained, hermetically sealed personal spaces. We orient visually and kinesthetically at the level of the interior, often seated and immobile, rather than at the level of the outer landscape being traversed by the vehicle. This is true, too, of elevators, modern contraptions that confound the nervous system almost as much as revolving doors — particularly noticeable with small children and pets until they become habituated to managing such doorways with foreknowledge of what lies beyond.

The built environment has historically included transitional spaces between inner and outer environments. Churches and cathedrals include a vestibule or narthex between the exterior door and inner door leading to the church interior or nave. Additional boundaries in church architecture mark increasing levels of hierarchy and intimacy, just as entryways of domiciles give way to increasingly personal spaces: parlor or sitting room, living room, dining room, kitchen, and bedroom. (The sheer utility of the “necessary” room defies these conventions.) Commercial and entertainment spaces use lobbies, atria, and prosceniums in similar fashion.

What most interests me, however, is the transitional space outside of buildings. This came up in a recent conversation, where I observed that local school buildings from the early to middle part of the 20th century have a distinguished architecture set well back from the street where lawns, plazas, sidewalks, and porches leading to entrances function as transitional spaces and encourage social interaction. Ample window space, columnar entryways, and roof embellishments such as dormers, finials, cupolas, and cornices add style and character befitting dignified public buildings. In contrast, 21st-century school buildings in particular and public buildings in general, at least in the city where I live, tend toward porchless big-box warehouses built right up to the sidewalk, essentially robbing denizens of their social space. Blank, institutional walls forbid rather than invite. Consider, for example, how students gathered in a transitional space are unproblematic, whereas those congregated outside a school entrance abutting a narrow sidewalk suggest either a gauntlet to be run or an eruption of violence in the offing. (Or maybe they’re just smoking.) Anyone forced to climb past loiterers outside a commercial establishment experiences similar suspicions and discomforts.

Beautifully designed and constructed public spaces of yore — demonstrations of a sophisticated appreciation of both function and intent — have fallen out of fashion. Maybe designers then understood how transitional spaces ease the orienting response, or maybe they only intuited it. Hard to say. Architectural designs of the past acknowledged and accommodated social functions and sophisticated aesthetics that are today actively discouraged except for pointless stunt architecture that usually turns into boondoggles for taxpayers. This has been the experience of many municipalities when replacing or upgrading schools, transit centers, sports arenas, and public parks. Efficient land use today drives toward omission of transitional space. One of my regular reads is James Howard Kunstler’s Eyesore of the Month, which profiles one architectural misfire after the next. He often mocks the lack of transitional space or, where it is present, its open hostility to pedestrian use: unnecessary obstacles and proximity to vehicular traffic (noise, noxious exhaust, and questionable safety) that discourage use. Chalk this up as another collapsed art (e.g., painting, music, literature, and poetry) so desperate to deny the past and establish new aesthetics that it has ruined itself.

As a student, practitioner, and patron of the fine arts, I long ago imbibed the sybaritic imploration that beauty and meaning drawn out of sensory stimulation were a significant source of enjoyment, a high calling even. Accordingly, learning to decode and appreciate the conventions of various forms of expression required effort, which was repaid and deepened over a lifetime of experience. I recognize that, because of their former close association with the European aristocracy and American moneyed class, the fine arts (Western genres) have never quite distanced themselves from charges of elitism. However, I’ve always rejected that perspective. Since the latter part of the 20th century, the fine arts have never been more available to people of all walks of life, as crowds at art galleries attest.

Beyond the fine arts, I also recognize that people have a choice of aesthetics. Maybe it’s the pageantry of sports (including the primal ferocity of combat sports); the gastronomic delight of a fine meal, liquor, or cigar; identification with a famous brand; the pampered lifestyles of the rich and famous, with their premium services, personal staffs, and entourages; the sound of a Harley-Davidson motorcycle or a 1970s American muscle car; the sartorial appointments of high fashion and couture; simple biophilia; the capabilities of a smartphone or other tech device; or the brutal rhetoric and racehorse politics of the campaign trail. Take your pick. In no way do I consider the choice of one aesthetic versus another equivalent. Differences of quality and intent are so obvious that any relativist claim asserting false equivalence ought to be dismissed out of hand. However, there is considerable leeway. One of my teachers summed up taste variance handily: “that’s why they make chocolate and vanilla.”

Beauty and meaning are not interchangeable, but they are often sloppily conflated. The meaning found in earnest striving and sacrifice is a quintessential substitute for beauty. Thus, we’re routinely instructed to honor our troops for their service. Patriotic holidays (Independence Day, Memorial Day, Veterans Day, and others) form a thematic group. Considering how the media reflexively valorizes (rarely deploring) acts of force and mayhem authorized and carried out by the state, and how the citizenry takes that instruction and repeats it, it’s fair to say that an aesthetic attaches to such activity. For instance, some remember (with varying degrees of disgust) news anchor Brian Williams waxing rhapsodic over the Syrian conflict. Perhaps Chris Hedges’ book War is a Force That Gives Us Meaning provides greater context. I haven’t read the book, but the title is awfully provocative, which some read as an encomium to war. Book jacket blurbs and reviews indicate more circumspect arguments drawn from Hedges’ experience as a war correspondent.

We’re currently in the so-called season of giving. No one can any longer escape the marketing harangues about Black Friday, Small Business Saturday, and Cyber Monday that launch the season. None of those days has much integrity, not that they ever did, since they bleed into each other as retailers strain to get a jump on one or extend another. We’re a thoroughly consumer society, which is itself an aesthetic (maybe I should have written anesthetic). Purchasing decisions are made according to a choice of aesthetics: brand, features, looks, price, etc. An elaborate machinery of psychological prods and inducements has been developed over the decades to influence consumer behavior. (A subgenre of psychology also studies these influences and behaviors.) The same can be said of the shaping of consumer-citizen opinion. While some resist being channeled into others’ prescribed thought worlds, the difficulty of maintaining truly original, independent thought in the face of a deluge of both reasonable and bad-faith influence makes succumbing nearly inevitable. Under such conditions, one wonders if a choice of aesthetic even really exists.