Archive for the ‘Culture’ Category

Some while back, Scott Adams (my general disdain for him noted but unexpanded, since I’m not in the habit of shitting on people), using his knowledge of hypnosis, began selling the narrative that our Commander-in-Chief is cannily adept at the art of persuasion. I, for one, am persuaded by neither Adams nor 45 but must admit that many others are. Constant shilling for control of narratives by agents of all sorts could not be more transparent (to me at least), rendering the whole enterprise null. Similarly, when I see an advertisement (infrequently, I might add, since I use ad blockers and don’t watch broadcast TV or news programs), I’m rarely inclined to seek more information or make a purchase. Once in a long while, an ad creeps through my defenses and hits one of my interests, and even then, I rarely respond because, duh, it’s an ad.

In the embedded video below, Stuart Ewen describes how some learned to exploit a feature (not a bug) in human cognition, namely, appeals to emotion that overwhelm rational response. The most obvious, well-worn example is striking fear into people’s hearts and minds to sell the illusion that safety requires relinquishing civil liberties and/or fighting foreign wars.

The way Ewen uses the term consciousness differs from the way I use it. He refers specifically to opinion- and decision-making (the very things vulnerable to manipulation) rather than the more generalized and puzzling property of having an individual identity or mind and, with it, self-awareness. In fact, Ewen uses the terms consciousness industry and persuasion industry instead of public relations and marketing to name those who spin information and thus public discourse. At some level, absolutely everyone is guilty of seeking to persuade others, which again is a basic feature of communication. (Anyone negotiating the purchase of, say, a new or used car faces the persuasion of the sales agent with some skepticism.) What turns it into something maniacal is using lies and fabrication to advance agendas against the public interest, especially where public opinion is already clear.

Ewen also points to early 20th-century American history, where political leaders and marketers were successful in manipulating mass psychology in at least three ways: 1. drawing the pacifist U.S. public into two world wars of European origin, 2. transforming citizens into consumers, thereby saving capitalism from its inherently self-destructive endgame (creeping up on us yet again), and 3. suppressing emergent collectivism, namely, socialism. Of course, unionism as a collectivist institution still gained considerable strength but only within the larger context of capitalism, e.g., achieving the American Dream in purely financial terms.

So getting back to Scott Adams’ argument, the notion that the American public is under some form of mass hypnosis (persuasion) and that 45 is the master puppeteer is perhaps half true. Societies do sometimes go mad and fall under the spell of a mania or cult leader. But 45 is not the driver of the current episode, merely the embodiment. I wouldn’t say that 45 figured out anything because that awards too much credit to presumed understanding and planning. Rather, he worked out (accidentally and intuitively — really by default considering his job in 2016) that his peculiar self-as-brand could be applied to politics by treating it all as reality TV, which by now everyone knows is its own weird unreality the same way professional wrestling is fundamentally unreal. (The term political theater applies here.) He demonstrated a knack (at best) for keeping the focus firmly on himself and driving ratings (abetted by the mainstream media that had long regarded him as a clown or joke), but those objectives were never really in service of a larger political vision. In effect, the circus brought to town offers its own bizarre constructed narrative, but its principal characteristic is gawking, slack-jawed, made-you-look narcissism, not any sort of proper guidance or governance.

Have to admit, when I first saw this brief article about middle school kids being enrolled in mandatory firearms safety classes, my gut response was something sarcastic to the effect of “No, this won’t end badly at all ….” Second thought (upon reading the headline alone) was that it had to be Texas. Now that I’ve calmed down some, neither response remains primary in my thinking.

I’ve written before about the perception and function of guns of differing types. I daresay that little clarity has been achieved on the issue, especially because a gun is no longer understood as a tool (with all the manifold purposes that might entail) but is instead always a weapon (both offensive and defensive). The general assumption is that anyone brandishing a weapon (as in open carry) is preparing to use it imminently (so shoot first!). A corollary is that anyone merely owning a gun is similarly prepared but only in the early, hypothetical, or contingent stages. These are not entirely fair assumptions, but they demonstrate how our perception of the tool has shifted toward emotionalism.

My father’s generation may be among the last, outside those with specialized training (e.g., hunters and those who have served in the military, both groups still accounting for quite a lot of people), to retain the sense of a gun being a tool. Periodic chain e-mails sometimes point out that students (especially at rural and collar-county schools) used to bring guns to school to stow in their lockers for after-school use with Gun Club. I’d say “imagine doing that now” except that Iowa is doing just that, though my guess is that the guns are stored more securely than in a student locker. Thus, exposure to gun safety/handling and target practice may remove some of the stigma assigned to the tool as well as teach students respect for it.

Personally, I’ve had limited exposure to guns and tend to default (unthinkingly, reflexively) to what I regard as a liberal/progressive left opinion that I don’t want to own a gun and that guns should be better regulated to stem gun violence. However, only a little circumspection is needed to puncture that one-size-fits-all bubble. And as with so many complicated issues of the day, it’s a little hard to know what to wish for or to presume that I have the wisdom to know better than others. Maybe Iowa has it right and this may not end so badly.

As I reread what I wrote 2.5 years ago in my first post on this topic, I surmise that the only update needed to my initial assessment is a growing pile of events that demonstrate my thesis: our corrupted information environment is too taxing on human cognition, with the result that a small but growing segment of society gets radicalized (wound up like a spring) and relatively random individuals inevitably pop, typically in a self-annihilating gush of violence. News reports bear this out periodically, as one lone-wolf kook after another takes it upon himself (are there any examples of females doing this?) to shoot or blow up some target, typically chosen irrationally or randomly though for symbolic effect. More journalists and bloggers are taking note of this activity and evolving or resurrecting nomenclature to describe it.

The earliest example I’ve found offering nomenclature for this phenomenon is a blog with a single post from 2011 (oddly, no follow-up) describing so-called stochastic terrorism. Other terms include syntactic violence, semantic violence, and epistemic violence, but they all revolve around the same point. Whether on the sending or receiving end of communications, some individuals are particularly adept at or sensitive to dog whistles that over time activate and exacerbate tendencies toward radical ideology and violence. Wired has a brief article from a few days ago discussing stochastic terrorism as jargon, which is basically what I’m doing here. Admittedly, the last of these terms, epistemic violence (alternative: epistemological violence), ranges farther afield from the end effect I’m calling wind-up toys. For instance, this article discussing structural violence is much more academic in character than when I blogged on the same term (one of a handful of “greatest hits” for this blog that return search-engine hits with some regularity). Indeed, just about any of my themes and topics can be given a dry, academic treatment. That’s not my approach (I gather opinions differ on this account, but I insist that real academic work is fundamentally different from my armchair cultural criticism), but it’s entirely valid despite being a bit remote for most readers. One can easily get lost down the rabbit hole of analysis.

If indeed it’s mere words and rhetoric that transform otherwise normal people into criminals and mass murderers, then I suppose I can understand the distorted logic of the far Left that equates words and rhetoric themselves with violence, followed by the demand that they be provided with warnings and safe spaces lest they be triggered by what they hear, read, or learn. As I understand it, the fear is not so much that vulnerable, credulous folks will be magically turned into automatons wound up and set loose in public to enact violent agendas but instead that virulent ideas and knowledge (including many awful truths of history) might cause discomfort and psychological collapse akin to what happens when targets of hate speech and death threats are reduced, say, to quivering agoraphobia. Desire for protection from harm is thus understandable. The problem with such logic, though, is that protections immediately run afoul of free speech, a hallowed but misunderstood American institution that preempts quite a few restrictions many would have placed on the public sphere. Protections also stall learning and truth-seeking straight out of the gate. And besides, preemption of preemption doesn’t work.

In information theory, the notion of a caustic idea taking hold of an unwilling person and having its wicked way with him or her is what’s called a mind virus or meme. The viral metaphor accounts for the infectious nature of ideas as they propagate through the culture. For instance, every once in a while, a charismatic cult emerges and inducts new members, a suicide cluster appears, or suburban housewives develop wildly disproportionate phobias about Muslims or immigrants (or worse, Muslim immigrants!) poised at their doorsteps with intentions of rape and murder. Inflaming these phobias, often done by pundits and politicians, is precisely the point of semantic violence. Everyone is targeted but only a few are affected to the extreme of acting out violently. Milder but still invalid responses include the usual bigotries: nationalism, racism, sexism, and all forms of tribalism, “othering,” or xenophobia that seek to insulate oneself safely among like folks.

Extending the viral metaphor, to protect oneself from infectious ideas requires exposure, not insulation. Think of it as a healthy immune system built up gradually, typically early in life, through slow, steady exposure to harm. The alternative is hiding oneself away from germs and disease, which has the ironic result of weakening the immune system. For instance, I learned recently that peanut allergies can be overcome by gradual exposure — a desensitization process — but are exacerbated by removal of peanuts from one’s environment and/or diet. This is what folks mean when they say the answer to hate speech is yet more (free) speech. The nasty stuff can’t be dealt with properly when it’s quarantined, hidden away, suppressed, or criminalized. Maybe there are exceptions. Science fiction entertains those dangers with some regularity, where minds are shunted aside to become hosts for invaders of some sort. That might be overstating the danger somewhat, but violent eruptions may lend it some credence.

Several politicians on the U.S. national stage have emerged in the past few years as firebrands of new politics and ideas about leadership — some salutary, others less so. Perhaps the quintessential example is Bernie Sanders, who identified himself as a Socialist within the Democratic Party, a tacit acknowledgement that there are, thus far, no electable third-party candidates for high office. Even 45’s emergence as a de facto independent candidate within the Republican Party points to the same effect (and at roughly the same time). Ross Perot and Ralph Nader came closest in recent U.S. politics to establishing viable candidacies outside the two-party system, but their ultimate failures only reinforce the rigidity of modern party politics; it’s a closed system.

Those infusing energy and new (OK, in truth, they’re old) ideas into this closed system are intriguing. By virtue of his immediate name/brand recognition, Bernie Sanders can now go by his single given name (same is true of Hillary, Donald, and others). Supporters of Bernie’s version of Democratic Socialism are thus known as Bernie Bros, though the term is meant pejoratively. Considering his age, however, Bernie is not widely considered a viable presidential candidate in the next election cycle. Among other firebrands, I was surprised to find Alexandria Ocasio-Cortez (often referred to simply as AOC) described in the video embedded below as a Democratic Socialist but without any reference to Bernie (“single-handedly galvanized the American people”):

Despite the generation(s) gap, young adults had no trouble supporting Bernie three years ago but appear to have shifted their ardent support to AOC. Yet Bernie is still relevant and makes frequent statements demonstrating how well he understands the failings of the modern state, its support of the status quo, and the cult of personality behind certain high-profile politicians.

As I reflect on history, it occurs to me that many of the major advances in society (e.g., abolition, suffrage, the labor movement, civil rights, equal rights and abortion, and the end of U.S. involvement in the Vietnam War) occurred not because our government led us to them but because the American people forced the issues. The most recent examples of the government yielding to the will of the people are gay marriage and cannabis/hemp legalization (still underway). I would venture that Johnson and Nixon were the last U.S. presidents who experienced palpable fear of the public. (Claims that Democrats are afraid of AOC ring hollow — so far.) As time has worn on, later presidents have been confident in their ability to buffalo the public or at least to use the power of the state to quell unrest (e.g., the Occupy movement). (Modern means of crowd control raise serious questions about the legitimacy of any government that would use them against its own citizens. I would include enemy combatants, but that is a separate issue.) In contrast with these salutary examples of the public exercising its disruptive power over a recalcitrant government are arguably more numerous examples where things went rather badly haywire. Looking beyond the U.S., the French Reign of Terror and the Bolsheviks are the two examples that leap immediately to mind, but there are plenty of others. The pattern appears to be a populist ideology that takes root and turns virulent and violent, followed by consolidation of power by those who manage to survive the inevitable purge of dissidents.

I bring this up because we’re in a period of U.S. history characterized by populist ideological possession on both sides (left/right) of the political continuum, though politics ought to be better understood as a spectrum. Extremism has again found a home (or several), and although the early stages appear to be mild or harmless, I fear that a charismatic leader might unwittingly succeed in raising a mob. As the saying goes (from the Indiana Jones movie franchise), “You are meddling with forces you cannot possibly comprehend,” to which I would add cannot control. Positioning oneself at the head of a movement or rallying behind such an opportunist may feel like the right thing to do but could easily and quickly veer into wildly unintended consequences. How many times in history has that already occurred?

For a time after the 2008 financial collapse, skyscraper projects in Chicago came to a dead halt, mostly due to dried-up financing. My guess (since I don’t know with any reliability) is that much the same obtained worldwide. However, the game appears to be back on, especially in New York City, one of few cities around the globe where so-called “real money” tends to pool and collect. Visual Capitalist has an interesting infographic depicting changes to the NYC skyline every 20 years. The number of supertalls topping 1,000 feet expected by 2020 is quite striking.

Courtesy of Visual Capitalist

The accompanying text admits that NYC is left in the dust by China, specifically, the Pearl River Delta Megacity, which includes Hong Kong, Shenzhen, Macau, and others. As I’ve written before, the mad rush to build (earning ridiculous, absurd, imaginary prestige points awarded by and to exactly no one) takes no apparent notice of a slo-mo crack-up in the way modern societies organize and fund themselves. The new bear market might give one … um, pause.

Also left in the dust is Chicago, home of the original skyscraper. Since the 2008 collapse, Chicago’s most ambitious project, the ill-fated Chicago Spire (a/k/a the Fordham Spire), was abandoned despite a big hole dug in the ground and some foundation work completed. An absence of completed prestige projects since 2008 means Chicago has been lapped several times over by NYC, not that anyone is counting. The proposed site of the Chicago Spire is too enticing, however — just inside Lake Shore Drive at the mouth of the Chicago River — for it to remain dormant for long. Indeed, a press release last year (which escaped my attention at the time) announced redevelopment of the site, and a slick website is operating for now (I’ve linked in the past to similar sites that were later abandoned along with their subject projects). Also reported late last year, Chicago appears to have rejoined the game in earnest, with multiple projects already under construction and others in the planning/approval phases.

So if hiatus was called the last time we crashed financially (a regular occurrence, I note), it seems we’ve called hiatus on the hiatus and are back in a mad, futile race to remake modernity into gleaming vertical cities dotting the globe. Such hubris and exuberance might be intoxicating to technophiles, but I’m reminded of an observation (can’t locate a quote, sorry) to the effect that civilizations’ most extravagant projects are undertaken just before their collapses. Our global civilization is no different.

I’ve written a different form of this blog post at least once before, maybe more. Here’s the basic thesis: the bizarro unreality of the world in which we now live is egregious enough to make me wonder if we haven’t veered wildly off the path at some point and now exist within reality prime. I suppose one can choose any number of historical inflections to represent the branching point. For me, it was the reelection of George W. Bush in 2004. (The 9/11 attacks and “wars” in Afghanistan and Iraq had already occurred or commenced by then, and it had already become clear that the lies — Saddam had WMDs — that sold the American public on the Iraq “war” were effective and remain so today.) Lots of other events changed the course of history, but none other felt as much to me like a gut punch precisely because, in the case of the 2004 presidential election, we chose our path. I fantasized waking up from my reality-prime nightmare but eventually had to grudgingly accept that if multiverses exist, ours had become one where we chose (collectively, and just barely) to keep in office an executive who behaved like a farce of stupidity. Well, joke’s on us. Twelve years later, we chose someone even more stupid, though with a “certain serpentine cunning,” and with arguably the worst character of any U.S. executive in living history.

So what to do in the face of this dysfunctional state of affairs? Bret Weinstein below has ideas. (As usual, I’m quite late, embedding a video that by Internet standards is already ancient. I also admit this is equivalent to a smash cut because I don’t have a particularly good transition or justification for turning so suddenly to Weinstein.) Weinstein is an evolutionary biologist, so no surprise that the approach he recommends is borne out of evolutionary thinking. In fairness, a politician would logically recommend political solutions, a financier would recommend economic solutions, and other professionals would seek solutions from within their areas of expertise.

The title of the interview is “Harnessing Evolution,” meaning Weinstein suggests we use evolutionary models to better understand our own needs and distortions to guide or plot proper path(s) forward and get back on track. Never mind that a healthy minority of the U.S. public rejects evolution outright while an additional percentage takes a hybrid stance. While I’m impressed that Weinstein has an answer for everything (pedagogue or demagogue or both?) and has clearly thought through sociopolitical issues, I daresay he’s living in reality double-prime if he thinks science education can be a panacea for what ails us. My pessimism is showing.

For ambulatory creatures, vision is arguably the primary of the five (main) senses. Humans are among those species that stand upright, facilitating a portrait orientation when interacting among ourselves. The terrestrial environment on which we live, however, is in landscape (as distinguished from the more nearly 3D environments of birds and insects in flight or marine life in rivers, lakes, seas, and oceans). My suspicion is that modest visual conflict between portrait and landscape is among the dynamics that give rise to the orienting response, a step down from the startle reflex, that demands full attention when visual environments change.

I recall reading somewhere that wholesale changes in surroundings, such as when crossing a threshold, passing through a doorway, entering or exiting a tunnel, and notably, entering and exiting an elevator, trigger the orienting response. Indeed, the flush of disorientation before one gets his or her bearings is tantamount to a mind wipe, at least momentarily. This response may also help to explain why small, bounded spaces such as interiors of vehicles (large and small) in motion feel like safe, contained, hermetically sealed personal spaces. We orient visually and kinesthetically at the level of the interior, often seated and immobile, rather than at the level of the outer landscape being traversed by the vehicle. This is true, too, of elevators, a modern contraption that confounds the nervous system almost as much as revolving doors — particularly noticeable with small children and pets until they become habituated to managing such doorways with foreknowledge of what lies beyond.

The built environment has historically included transitional spaces between inner and outer environments. Churches and cathedrals include a vestibule or narthex between the exterior door and inner door leading to the church interior or nave. Additional boundaries in church architecture mark increasing levels of hierarchy and intimacy, just as entryways of domiciles give way to increasingly personal spaces: parlor or sitting room, living room, dining room, kitchen, and bedroom. (The sheer utility of the “necessary” room defies these conventions.) Commercial and entertainment spaces use lobbies, atria, and prosceniums in similar fashion.

What most interests me, however, is the transitional space outside of buildings. This came up in a recent conversation, where I observed that local school buildings from the early to middle part of the 20th century have a distinguished architecture set well back from the street where lawns, plazas, sidewalks, and porches leading to entrances function as transitional spaces and encourage social interaction. Ample window space, columnar entryways, and roof embellishments such as dormers, finials, cupolas, and cornices add style and character befitting dignified public buildings. In contrast, 21st-century school buildings in particular and public buildings in general, at least in the city where I live, tend toward porchless big-box warehouses built right up to the sidewalk, essentially robbing denizens of their social space. Blank, institutional walls forbid rather than invite. Consider, for example, how students gathered in a transitional space are unproblematic, whereas those congregated outside a school entrance abutting a narrow sidewalk suggest either a gauntlet to be run or an eruption of violence in the offing. (Or maybe they’re just smoking.) Anyone forced to climb past loiterers outside a commercial establishment experiences similar suspicions and discomforts.

Beautifully designed and constructed public spaces of yore — demonstrations of a sophisticated appreciation of both function and intent — have fallen out of fashion. Maybe designers understood then how transitional spaces ease the orienting response, or maybe they only intuited it. Hard to say. Architectural designs of the past acknowledged and accommodated social functions and sophisticated aesthetics that are today actively discouraged except for pointless stunt architecture that usually turns into boondoggles for taxpayers. This has been the experience of many municipalities when replacing or upgrading schools, transit centers, sports arenas, and public parks. Efficient land use today drives toward omission of transitional space. One of my regular reads is James Howard Kunstler’s Eyesore of the Month, which profiles one architectural misfire after another. He often mocks the lack of transitional space, or when present, observes its open hostility to pedestrian use, with unnecessary obstacles and proximity to vehicular traffic (noise, noxious exhaust, and questionable safety) discouraging use. Chalk this up as another collapsed art (e.g., painting, music, literature, and poetry) so desperate to deny the past and establish new aesthetics that it has ruined itself.

As a student, practitioner, and patron of the fine arts, I long ago imbibed the sybaritic imploration that beauty and meaning drawn out of sensory stimulation were a significant source of enjoyment, a high calling even. Accordingly, learning to decode and appreciate the conventions of various forms of expression required effort, which was repaid and deepened over a lifetime of experience. I recognize that, because of their former close association with the European aristocracy and American moneyed class, the fine arts (Western genres) have never quite distanced themselves from charges of elitism. However, I’ve always rejected that perspective. Since the latter part of the 20th century, the fine arts have never been more available to people of all walks of life, as crowds at art galleries attest.

Beyond the fine arts, I also recognize that people have a choice of aesthetics. Maybe it’s the pageantry of sports (including the primal ferocity of combat sports); the gastronomic delight of a fine meal, liquor, or cigar; identification with a famous brand; the pampered lifestyles of the rich and famous, with their premium services, personal staffs, and entourages; the sound of a Harley-Davidson motorcycle or a 1970s American muscle car; the sartorial appointments of high fashion and couture; simple biophilia; the capabilities of a smartphone or other tech device; or the brutal rhetoric and racehorse politics of the campaign trail. Take your pick. In no way do I consider the choice of one aesthetic versus another equivalent. Differences of quality and intent are so obvious that any relativist claim asserting false equivalence ought to be dismissed out of hand. However, there is considerable leeway. One of my teachers summed up taste variance handily: “that’s why they make chocolate and vanilla.”

Beauty and meaning are not interchangeable, but they are often sloppily conflated. The meaning found in earnest striving and sacrifice is a quintessential substitute for beauty. Thus, we’re routinely instructed to honor our troops for their service. Patriotic holidays (Independence Day, Memorial Day, Veterans Day, and others) form a thematic group. Considering how the media reflexively valorizes (rarely deploring) acts of force and mayhem authorized and carried out by the state, and how the citizenry takes that instruction and repeats it, it’s fair to say that an aesthetic attaches to such activity. For instance, some remember (with varying degrees of disgust) news anchor Brian Williams waxing rhapsodic over the Syrian conflict. Perhaps Chris Hedges’ book War is a Force That Gives Us Meaning provides greater context. I haven’t read the book, but the title is awfully provocative, which some read as an encomium to war. Book jacket blurbs and reviews indicate more circumspect arguments drawn from Hedges’ experience as a war correspondent.

We’re currently in the so-called season of giving. No one can any longer escape the marketing harangues about Black Friday, Small Business Saturday, and Cyber Monday that launch the season. None of those days has much integrity, not that they ever did, since they bleed into each other as retailers strain to get a jump on one or extend another. We’re a thoroughly consumer society, which is itself an aesthetic (maybe I should have written anesthetic). Purchasing decisions are made according to a choice of aesthetics: brand, features, looks, price, etc. An elaborate machinery of psychological prods and inducements has been developed over the decades to influence consumer behavior. (A subgenre of psychology also studies these influences and behaviors.) The same can be said of the shaping of consumer and citizen opinion. While some resist being channeled into others’ prescribed thought worlds, the difficulty of maintaining truly original, independent thought in the face of a deluge of both reasonable and bad-faith influence makes succumbing nearly inevitable. Under such conditions, one wonders if choice of aesthetic even really exists.

From time to time, I admit that I’m in no position to referee disputes, usually out of my lack of technical expertise in the hard sciences. I also avoid the impossible task of policing the Internet by assiduously pointing out error wherever it occurs. Others concern themselves with correcting the record and/or reinterpreting argument with improved context and accuracy. However, once in a while, something crosses my desk that gets under my skin. An article by James Ostrowski entitled “What America Has Done To its Young People is Appalling,” published at LewRockwell.com, is such a case. It’s undoubtedly a coincidence that the most famous Rockwell is arguably Norman Rockwell, whose celebrated illustrations for the Saturday Evening Post in particular helped reinforce a charming midcentury American mythology. Lew Rockwell, OTOH, is described briefly in the website’s About blurb:

The daily news and opinion site LewRockwell.com was founded in 1999 by anarcho-capitalists Lew Rockwell … and Burt Blumert to help carry on the anti-war, anti-state, pro-market work of Murray N. Rothbard.

Those political buzzwords probably deserve some unpacking. However, that project falls outside my scope. In short, they handily foist blame for what ails us in American culture on government planning, as distinguished from the comparative freedom of libertarianism. Government earns its share of blame, no doubt, especially with its enthusiastic prosecution of war (now a forever war); but as snapshots of competing political philosophies, these buzzwords are reductive almost to the point of meaninglessness. Ostrowski lays blame more specifically on feminism and progressive big government and harkens back to an idyllic 1950s nuclear family fully consonant with Norman Rockwell’s illustrations, thus invoking the nostalgic frame.

… the idyllic norm of the 1950’s, where the mother typically stayed home to take care of the kids until they reached school age and perhaps even long afterwards, has been destroyed.  These days, in the typical American family, both parents work fulltime which means that a very large percentage of children are consigned to daycare … in the critical first five years of life, the vast majority of Americans are deprived of the obvious benefits of growing up in an intact family with the mother at home in the pre-school years. We baby boomers took this for granted. That world is gone with the wind. Why? Two main reasons: feminism and progressive big government. Feminism encouraged women to get out of the home and out from under the alleged control of husbands who allegedly controlled the family finances.

Problem is, 1950s social configurations in the U.S. were the product of a convergence of historical forces, not least of which were the end of WWII and newfound American geopolitical and economic prominence. More pointedly, an entire generation of young men and women who had deferred family life during perilous wartime were then able to marry, start families, and provide for them on a single income — typically that of the husband/father. That was the baby boom. Yet to enjoy the benefits of the era fully, one probably needed to be a WASPy middle-class male or the child of one. Women and people of color fared … differently. After all, the 1950s yielded to the sexual revolution and civil rights era one decade later, both of which aimed specifically to improve the lived experience of, well, women and people of color.

Since the 1950s were only roughly 60 years ago, it might be instructive to consider how life was another 60 years before then, or in the 1890s. If one lived in an eastern American city, life was often a Dickensian dystopia, complete with child labor, poorhouses, orphanages, asylums, and unhygienic conditions. If one lived in an agrarian setting, which was far more prevalent before the great 20th-century migration to cities, then life was frequently dirt-poor subsistence and/or pioneer homesteading requiring dawn-to-dusk labor. Neither mode yet enjoyed social planning and progressive support including, for example, sewers and other modern infrastructure, public education, and economic protections such as unionism and trust busting. Thus, 19th-century America might be characterized fairly as being closer to anarcho-capitalism than at any time since. One of its principal legacies, one must be reminded, was pretty brutal exploitation of (and violence against) labor, which helps explain the emergence of political parties that sought to redress its worst scourges. Hindsight informs us now that reforms were slow, partial, and impermanent, leading to the observation that among all tried forms of self-governance, democratic capitalism can be characterized as perhaps the least awful.

So yeah, the U.S. came a long way from 1890 to 1950, especially in terms of standard of living, but may well be backsliding as the 21st-century middle class is hollowed out (a typical income — now termed household income — being hardly enough to support a family), aspirations to rise economically above one’s parents’ level no longer function, and the culture disintegrates into tribal resentments and unrealistic fantasies about nearly everything. Ostrowski marshals a variety of demographic facts and figures to support his argument (with which I agree in large measure), but he fails to make a satisfactory causal connection with feminism and progressivism. Instead, he sounds like 45 selling his slogan Make America Great Again (MAGA), meaning let’s turn back the clock to those nostalgic 1950s happy days. Interpretations of that sentiment run in all directions from innocent to virulent (but coded). By placing blame on feminism and progressivism, it’s not difficult to hear anyone citing those putative causes as an accusation that, if only those feminists and progressives (and others) had stayed in their assigned lanes, we wouldn’t be dealing now with cultural crises that threaten to undo us. What Ostrowski fails to acknowledge is that despite all sorts of government activity over the decades, no one in the U.S. is steering the culture nearly as actively as in centrally planned economies and cultures, current and historical, which in their worst instances are fascist and/or totalitarian. One point I’ll agree on, however, just to be charitable, is that the mess we’ve made and will leave to youngsters is truly appalling.

The largest lottery jackpot ever (roughly $1.6 billion) was won last week by some lucky or unlucky soul, depending. The mainstream media promoted this possible windfall relentlessly, instructing everyone, as possible winners, in the first steps to take with the winning ticket. It prompts the question, What Would a (sudden, new) Billionaire Do? with all that money, and many of us toyed with the prospect actively. The ruinous appeal is far too seductive to put out of mind entirely. Lottery winners, however, are not in the same class as the world’s billionaires, whose fortunes are closely associated with capitalist activity. Topping the list is Jeff Bezos of Amazon. The Walmart fortune deposits four Walton family members on the list, whose combined wealth exceeds even that of Bezos. Beyond conjecture about what billionaires should or might do besides the billionaire challenge or purchasing land in New Zealand for boltholes to leave the rest of us behind, it’s worth pointing out how such extraordinary wealth was amassed in the first place, because it surely doesn’t happen passively.

Before Amazon and Walmart but well after the robber barons of the early 20th century, McDonald’s was the ubiquitous employer offering dead-end, entry-level jobs that churned through people (labor) before discarding them carelessly, all the while racking up the profits touted on its placard: “millions [then billions] sold!” Its hallmark epithet (still in use) is the McJob. After McDonald’s, Walmart was widely understood as the worst employer in the world in terms of transfer of obscene wealth to the top while rank-and-file workers struggle below the poverty line. Many Walmart employees are still so poorly compensated that they qualify for government assistance, which effectively functions as a government subsidy to Walmart. Walmart’s awful labor practices, disruption of local mom-and-pop economies, and notorious squeezing of suppliers by virtue of its sheer market volume established the template for others. For instance, employers emboldened by insecure or hostage labor adopt hard-line policies such as firing employees who fail to appear at work in the midst of a hurricane or closing franchise locations solely to disallow labor organizing. What Walmart pioneered, Amazon has refined. Its fulfillment-center employees have been dubbed CamperForce for being made up primarily of older people living in vans and campers and deprived of meaningful alternatives. Jessica Bruder’s new book Nomadland (2018), rather ironically though shamelessly and predictably sold by Amazon, provides a sorry description, among other things, of how the plight of the disenfranchised is repackaged and sold back to them. As a result of severe criticism (not stemming directly from the book), Amazon made news earlier this month by raising its minimum wage to $15 per hour, but it remains to be seen if offsetting cuts to benefits wipe out apparent labor gains.

These business practices are by no means limited to a few notoriously bad corporations or their billionaire owners. As reported by the Economic Policy Institute and elsewhere, income inequality has been rising for decades. The graph below shows that wage increases have been entirely disproportionate, rewarding the top 10 percent, top 1 percent, and top 0.1 percent at increasingly absurd levels compared to the remaining 90 percent.

[Graph: Economic Policy Institute data on wage growth by income group]

It’s a reverse Robin Hood situation: the rich taking from not just the poor but everyone and giving to themselves. Notably, trickle-down economics has been widely unmasked as a myth but nonetheless remains a firmly entrenched idea among those who see nothing wrong with, say, ridiculous CEO pay precisely because they hope to eventually be counted among those overcompensated CEOs (or lottery winners) and so preserve their illusory future wealth. Never mind that the entire economic system is tilted egregiously in favor of a narrow class of predatory plutocrats. Actual economic results (minus all the rhetoric) demonstrate that as a function of late-stage capitalism, the ultrarich, having already harvested all the low-hanging fruit, have even gone after middle-class wealth as perhaps the last resource to plunder (besides the U.S. Treasury itself, which was looted with the last series of bailouts).

So what would a billionaire do in the face of this dynamic? Bezos is the new poster boy, a canonical example, and he shows no inclination to call into question the capitalist system that has rewarded him so handsomely. Even as he gives wage hikes, he takes away other compensation, keeping low-level employees in a perpetual state of doubt as to when they’ll finally lose what’s left to them before dying quietly in a van down by the river or out in the desert somewhere. Indeed, despite the admirable philanthropy of some billionaires (typically following many years of cutthroat activity to add that tenth and eleventh digit), the structural changes necessary to restore the middle class, secure the lower class with a living wage, and care for the long-term unemployed, permanently unemployable, and disabled (estimated to be at least 10% of the population) are nowhere on the horizon. Those in the best position to undertake such change just keep on building their wealth faster than everyone else, forsaking the society that enables them and withdrawing into armed compounds insulated from the rabble. Hardly a life most of us would desire if we knew in advance what a corrupting prison it turns out to be.