Archive for the ‘Idealism’ Category

Returning to Pankaj Mishra’s The Age of Anger, chapter 2 (subtitled “Progress and its Contradictions”) profiles two writers of the 18th-century Enlightenment: François-Marie Arouet (1694–1778), better known by his nom de plume Voltaire, and Jean-Jacques Rousseau (1712–1778). Voltaire was a proponent and embodiment of Enlightenment values and ethics, whereas Rousseau was among its primary critics. Both were hugely influential, and the controversy between their rival perspectives remains unresolved even today. First come Rousseau’s criticisms (in Mishra’s prose):

… the new commercial society, which was acquiring its main features of class divisions, inequality and callous elites during the eighteenth century, made its members corrupt, hypocritical and cruel with its prescribed values of wealth, vanity and ostentation. Human beings were good by nature until they entered such a society, exposing themselves to ceaseless and psychologically debilitating transformation and bewildering complexity. Propelled into an endless process of change, and deprived of their peace and stability, human beings failed to be either privately happy or active citizens. [p. 87]

This assessment could easily be mistaken for a description of the 1980s and 90s: ceaseless change and turmoil as new technological developments (e.g., the Internet) challenged everyone to reorient and reinvent themselves, often as a brand. Cultural transformation in the 18th century, however, involved more than just emerging economic reconfigurations. New secular, rationalist free thought openly challenged orthodoxies formerly imposed by religious and political institutions, and it demanded intellectual and entrepreneurial striving from anyone wishing to participate meaningfully in charting new paths for a progressive society purportedly no longer anchored in the past. Mishra goes on:

It isn’t just that the strong exploit the weak; the powerless themselves are prone to enviously imitate the powerful. But people who try to make more of themselves than others end up trying to dominate others, forcing them into positions of inferiority and deference. The lucky few on top remain insecure, exposed to the envy and malice of the also-rans. The latter use all means available to them to realize their unfulfilled cravings while making sure to veil them with a show of civility, even benevolence. [p. 89]

Sounds quite contemporary, no? Driving the point home:

What makes Rousseau, and his self-described ‘history of the human heart’, so astonishingly germane and eerily resonant is that, unlike his fellow eighteenth-century writers, he described the quintessential inner experience of modernity for most people: the uprooted outsider in the commercial metropolis, aspiring for a place in it, and struggling with complex feelings of envy, fascination, revulsion and rejection. [p. 90]

While most of the chapter describes Rousseau’s rejection and critique of 18th-century ethics, Mishra at one point depicts Rousseau arguing for instead of against something:

Rousseau’s ideal society was Sparta, small, harsh, self-sufficient, fiercely patriotic and defiantly un-cosmopolitan and uncommercial. In this society at least, the corrupting urge to promote oneself over others, and the deceiving of the poor by the rich, could be counterpoised by the surrender of individuality to public service, and the desire to seek pride for community and country. [p. 92]

Notably absent from Mishra’s profile is the meme mistakenly applied to Rousseau’s diverse criticism: the noble savage. Rousseau praises provincial men (patriarchal orientation acknowledged) largely unspoilt by the corrupting influence of a commercial, cosmopolitan society devoted to individual self-interest and amour propre, and his ideal (above) is uncompromising. Although Rousseau had the potential to insinuate himself successfully into fashionable salons and academic posts, his real affinity was with the weak and downtrodden — the peasant underclass — who were mostly passed over by a rapidly modernizing society. Others managed to raise their station in life above the peasantry to join the bourgeoisie (disambiguation needed on that term). Mishra’s description (via Rousseau) of this middle and upper-middle-class group gave me my first real understanding of the popular disdain many express toward bourgeois values using the derisive term bourgie (clearer when spoken than when written).

Profile of Voltaire to follow in part 2.


Apologies for this overlong blog post. I know that this much text tries the patience of most readers and is well in excess of my customary 3–4 paragraphs.

Continuing my book blogging of Pankaj Mishra’s Age of Anger, Chapter Two (subtitled “History’s Winners and Their Illusions”) focuses on the thought revolution that followed from the Enlightenment in Western Europe and its imitation in non-Western cultures, especially as manifested in the century leading to the French Revolution. Although the American Revolution (more narrowly a tax revolt with insistence on self-rule) preceded the French Revolution by slightly more than a decade, it’s really the French, whose motto liberté, égalité, fraternité came to prominence and defined an influential set of European values, who effectively challenged enthusiastic modernizers around the globe to try to catch up with the ascendant West.

However, almost as soon as this project appeared, i.e., attempting to transform ancien régime monarchies in Northern Africa, the Middle East, and Russia into something pseudo-European, critics arose who denounced the abandonment of tradition and centuries-old national identities. Perhaps they can be understood as the first wave of modern conservatism. Here is Mishra’s characterization:

Modernization, mostly along capitalist lines, became the universalist creed that glorified the autonomous rights-bearing individual and hailed his rational choice-making capacity as freedom. Economic growth was posited as the end-all of political life and the chief marker of progress worldwide, not to mention the gateway to happiness. Communism was totalitarian. Ergo its ideological opponent, American liberalism, represented freedom, which in turn was best advanced by moneymaking. [p. 48]

Aside: The phrase “rights-bearing individual” has obvious echoes with today’s SJWs and their poorly conceived demand for egalitarianism not just before the law but in social and economic outcomes. Although economic justice (totally out of whack with today’s extreme income and wealth inequality) is a worthy goal that aligns with idealized but not real-world Enlightenment values, SJW activism reinforces retrograde divisions of people based on race, gender, sexual orientation, religion, disability, etc. Calls to level out all these questionable markers of identity have resulted in intellectual confusion and invalidation of large “privileged” and/or “unoppressed” groups such as white males of European descent in favor of oppressed minorities (and majorities, e.g., women) of all categories. Never mind that many of those same white males are every bit as disenfranchised as others whose victimhood is paraded around as some sort of virtue granting them authority and preferential treatment.

Modernization has not been evenly distributed around the globe, which accounts for countries even today being designated First, Second, or Third World. An oft-used euphemism is “developing economy,” which translates to an invitation for wealthy First-World nations (or their corporations) to force their way in to exploit cheap labor and untapped natural resources. Indeed, as Mishra points out, the promise of joining First-World living standards (having diverged centuries ago) is markedly hollow:

… doubters of Western-style progress today include more than just marginal communities and some angry environmental activists. In 2014 The Economist said that, on the basis of IMF data, emerging economies — or, most of the human population — might have to wait for three centuries in order to catch up with the West. In this assessment, the last decade of high growth was an ‘aberration’ and ‘billions of people will be poorer for a lot longer than they might have expected just a few years ago’.

The implications are sobering: the non-West not only finds itself replicating the West’s trauma on an infinitely larger scale. While helping inflict the profoundest damage yet on the environment — manifest today in rising sea levels, erratic rainfall, drought, declining harvests, and devastating floods — the non-West also has no real prospect of catching up … [pp. 47–48]

That second paragraph is an unexpected acknowledgement that the earliest industrialized nations (France, the United Kingdom, and the U.S.) unwittingly put us on a path to self-annihilation only to be knowingly repeated and intensified by latecomers to industrialization. All those (cough) ecological disturbances are occurring right now, though the public has been lulled into complacency by temporary abundance, misinformation, under- and misreporting, and international political incompetence. Of course, ecological destruction is no longer merely the West’s trauma but a global catastrophe of the highest magnitude which is certainly in the process of catching up to us.

Late in Chapter Two, Mishra settles on the Crystal Palace exhibition space and utopian symbol, built in 1851 during the era of world’s fairs and mistaken enthusiasm regarding the myth of perpetual progress and perfectibility, as an irresistible embodiment of Western hubris to which some intellectual leaders responded with clear disdain. Although a marvelous technical feat of engineering prowess and demonstration of economic power (not unlike countries that host the Olympics — remember Beijing?), the Crystal Palace was also viewed as an expression of the sheer might of Western thought and its concomitant products. Mishra repeatedly quotes Dostoevsky, who visited the Crystal Palace in 1862 and described his visceral response to the place poignantly and powerfully:

You become aware of a colossal idea; you sense that here something has been achieved, that here there is victory and triumph. You even begin vaguely to fear something. However independent you may be, for some reason you become terrified. ‘For isn’t this the achievement of perfection?’ you think. ‘Isn’t this the ultimate?’ Could this in fact be the ‘one fold?’ Must you accept this as the final truth and forever hold your peace? It is all so solemn, triumphant, and proud that you gasp for breath. [p. 68]

And later, describing the “world-historical import” of the Crystal Palace:

Look at these hundreds of thousands, these millions of people humbly streaming here from all over the face of the earth. People come with a single thought, quietly, relentlessly, mutely thronging onto this colossal palace; and you feel that something final has taken place here, that something has come to an end. It is like a Biblical picture, something out of Babylon, a prophecy from the apocalypse coming to pass before your eyes. You sense that it would require great and everlasting spiritual denial and fortitude in order not to submit, not to capitulate before the impression, not to bow to what is, and not to deify Baal, that is not to accept the material world as your ideal. [pp. 69–70]

The prophetic finality of the Crystal Palace thus presaged twentieth-century achievements and ideas (the so-called American Century) that undoubtedly eclipsed the awesome majesty of the Crystal Palace, e.g., nuclear fission and liberal democracy’s purported victory over Soviet Communism (to name only two). Indeed, Mishra begins the chapter with a review of American declarations of the end of history, i.e., having reached final forms of political, social, and economic organization that are now the sole model for all nations to emulate. The whole point of the chapter is that such pronouncements are illusions with strong historical antecedents that might have cautioned us not to leap to unwarranted conclusions or to perpetuate a soul-destroying regime hellbent on extinguishing all alternatives. Of course, as Gore Vidal famously quipped, “Americans never learn; it’s part of our charm.”

 

Third version of this topic. Whereas the previous two were about competing contemporary North American ways of knowing, this one is broader in both time and space.

The May 2019 issue of Harper’s Magazine has a fascinating review of Christina Thompson’s book Sea People: The Puzzle of Polynesia (2019). Beyond the puzzle itself — how did Polynesian people migrate to, settle, and populate the far-flung islands of the Central and South Pacific? — the review hits upon one of my recurring themes on this blog, namely, that human cognition is plastic enough to permit highly divergent ways of knowing.

The review (and book?) is laden with Eurocentric detail about the “discovery” of closely related Polynesian cultures dispersed more widely (geographically) than any other culture prior to the era of mass migration. Indeed, the reviewer chides the author at one point for transforming Polynesia from a subject in its own right into an exotic object of (Western) fascination. This distorted perspective is commonplace and follows from the earlier “discovery” and colonization of North America as though it were not already populated. Cartographers even today are guilty of this Eurocentrism, relegating “empty” expanses of the Pacific Ocean to irrelevance in maps when in fact the Pacific is “the dominant feature of the planet” and contains roughly twenty-five thousand islands (at current sea level, that is — sea level was substantially lower during the last ice age some 13,000 years ago but is due to rise substantially by the end of this century and beyond, engulfing many of the islands now lying dangerously close to sea level). Similar distortions are needed to squash the spherical (3D) surface of the globe onto planar (2D) maps (e.g., the Mercator projection, which largely ignores the Pacific Ocean in favor of continents; other projections shown here) more easily conceptualized (for Westerners) in terms of coordinate geometry using latitude and longitude (i.e., the Cartesian plane).

The review mentions the familiar dichotomy of grouping a hammer, saw, hatchet, and log in terms of abstract categories (Western thought) vs. utility or practicality (non-Western). Exploration of how different ways of knowing manifest is, according to the review, among the more intellectually exciting parts of the book. That’s the part I’m latching onto. For instance, the review offers this:

Near the middle of Sea People, Thompson explores the ramification of Polynesia as, until contact, an oral culture with “an oral way of seeing.” While writing enables abstraction, distancing, and what we generally call objectivity, the truth of oral cultures is thoroughly subjective. Islands aren’t dots on a map seen from the sky but destinations one travels to in the water.

This is the crux of the puzzle of Polynesians fanning out across the Pacific approximately one thousand years ago. They had developed means of wayfinding in canoes and outriggers without instruments or maps roughly 500 years prior to Europeans crossing the oceans in sailing ships. Perhaps I’m reading too much into the evidence, but abstraction and objectivity as a particular way of knowing, bequeathed to Western Europe via the Enlightenment and development of the scientific method, stunted or delayed exploration of the globe precisely because explorers began with a god’s eye view of the Earth from above rather than from the surface (object vs. subject). In contrast, quoting here from the book rather than the review, Polynesians used

a system known as etak, in which they visualize a “reference island,” — which is usually a real island but may also be imaginary — off to one side of the path they are following, about midway between their starting point and their destination. As the journey progresses, this island “moves” under each of the stars in the star path [situated near the horizon rather than overhead], while the canoe in which the voyagers are traveling stays still. Of course, the navigators know that it is the canoe and not the islands that are moving, but this is the way they conceptualize the voyage.

Placing oneself at the center of the world or universe — at least for the purpose of navigation — is a conceptual pose Westerners discarded when heliocentrism gradually replaced geocentrism. (Traveling using GPS devices ironically places the traveler back at the center of the map with terrain shifting around the vehicle, but it’s a poor example of wayfinding precisely because the traveler fobs the real work onto the device and likely possesses no real understanding or skill traversing the terrain beyond following mechanical instructions.) While we Westerners might congratulate ourselves for a more accurate, objective orientation to the stars, its unwitting limitations are worth noting. Recent discoveries regarding human prehistory, especially megalithic stone construction accomplished with techniques still unknown and difficult to replicate even with modern technology, point to the existence of other ways of knowing lost to contemporary human cultures steadily triangulating on and conforming to Western thought (through the process of globalization). Loss of diversity among ways of knowing creates yet another sort of impoverishment that can only be barely glimpsed, since most of us are squarely inside the bubble. Accordingly, it’s not for nothing that some unusually sensitive critics of modernity suggest we’re entering a new Dark Age.

 

I put aside Harari’s book from the previous blog post in favor of Pankaj Mishra’s Age of Anger: A History of the Present (2017). Mishra’s sharp cultural criticism is far more convincing than Harari’s Panglossian perspective. Perhaps some of that is due to an inescapable pessimism in my own character. Either way, I’ve found the first 35 pages dense with observations of interest to me as a blogger and armchair cultural critic. Some while back, I published a post attempting to delineate (not very well, probably) what’s missing in the modern world despite its obvious material abundance. Reinforcing my own contentions, Mishra’s thesis (as I understand it so far) is this: we today share with earlier post-Enlightenment generations an array of resentments and hatreds (Fr.: ressentiment) aimed wrongly at scapegoats for the political and social failure to deliver the promises of progressive modernity equitably. For instance, Mishra describes

… flamboyant secular radicals in the nineteenth and early twentieth centuries: the aesthetes who glorified war, misogyny and pyromania; the nationalists who accused Jews and liberals of rootless cosmopolitanism and celebrated irrational violence; and the nihilists, anarchists and terrorists who flourished in almost every continent against a background of cosy political-financial alliances, devastating economic crises and obscene inequalities. [pp. 10–11]

Contrast and/or compare his assessment of the recent past:

Beginning in the 1990s, a democratic revolution of aspiration … swept across the world, sparking longings for wealth, status and power, in addition to ordinary desires for stability and contentment, in the most unpromising circumstances. Egalitarian ambition broke free of old social hierarchies … The culture of [frantic] individualism went universal … The crises of recent years have uncovered an extensive failure to realize the ideals of endless economic expansion and private wealth creation. Most newly created ‘individuals’ toil within poorly imagined social and political communities and/or states with weakening sovereignty … individuals with very different pasts find themselves herded by capitalism and technology into a common present, where grossly unequal distributions of wealth and power have created humiliating new hierarchies. This proximity … is rendered more claustrophobic by digital communications … [S]hocks of modernity were once absorbed by inherited social structures of family and community, and the state’s welfare cushions [something mentioned here, too]. Today’s individuals are directly exposed to them in an age of accelerating competition on uneven playing fields, where it is easy to feel that there is no such thing as either society or state, and that there is only a war of all against all. [pp. 12–14]

These long quotes (the second one cut together from longer paragraphs) are here because Mishra is remarkably eloquent in his diagnosis of globalized culture. Although I’ve only read the prologue, I expect to find support for my long-held contention that disorienting disruptions of modernity (using Anthony Giddens’ sociological definition rather than the modish use of the term Postmodern to describe only the last few decades) create unique and formidable challenges to the formation of healthy self-image and personhood. Foremost among these challenges is an unexpectedly oppressive information environment: the world forced into full view and inciting comparison, jealousy, envy, and hatred stemming from routine and ubiquitous frustrations and humiliations as we each struggle in life getting our personal share of attention, renown, and reward.

Another reason Mishra provides for our collective anger is a deep human yearning not for anarchism or radical freedom but rather for belonging and absorption within a meaningful social context. This reminds me of Erich Fromm’s book Escape from Freedom (1941), which I read long ago but can’t remember so well anymore. I do remember quite vividly how counter-intuitive was the suggestion that absolute freedom is actually burdensome as distinguished from the usual programming we get about breaking free of all restraints. (Freedom! Liberty!) Indeed, Mishra provides a snapshot of multiple cultural and intellectual movements from the past two centuries where abandoning oneself to a cause, any cause, was preferable to the boredom and nothingness of everyday life absent purpose other than mere existence. The modern substitute for larger purpose — commodity culture — is a mere shadow of better ways of spending one’s life. Maybe commodity culture is better than sacrificing one’s life fighting wars (a common fate) or destroying others, but that’s a much longer, more difficult argument.

More to follow as my reading progresses.

Some while back, Scott Adams (my general disdain for him noted but unexpanded, since I’m not in the habit of shitting on people), using his knowledge of hypnosis, began selling the narrative that our Commander-in-Chief is cannily adept at the art of persuasion. I, for one, am persuaded by neither Adams nor 45 but must admit that many others are. Constant shilling for control of narratives by agents of all sorts could not be more transparent (for me at least), rendering the whole enterprise null. Similarly, when I see an advertisement (infrequently, I might add, since I use ad blockers and don’t watch broadcast TV or news programs), I’m rarely inclined to seek more information or make a purchase. Once in a long while, an ad creeps through my defenses and hits one of my interests, and even then, I rarely respond because, duh, it’s an ad.

In the embedded video below, Stuart Ewen describes how some learned to exploit a feature (not a bug) in human cognition, namely, appeals to emotion that overwhelm rational response. The most obvious, well-worn example is striking fear into people’s hearts and minds to convince them of an illusion of safety necessitating relinquishing civil liberties and/or fighting foreign wars.

The way Ewen uses the term consciousness differs from the way I use it. He refers specifically to opinion- and decision-making (the very things vulnerable to manipulation) rather than the more generalized and puzzling property of having an individual identity or mind and with it self-awareness. In fact, Ewen uses the terms consciousness industry and persuasion industry instead of public relations and marketing to name those who spin information and thus public discourse. At some level, absolutely everyone is guilty of seeking to persuade others, which again is a basic feature of communication. (Anyone negotiating the purchase of, say, a new or used car faces the persuasion of the sales agent with some skepticism.) What turns it into something maniacal is using lies and fabrication to advance agendas against the public interest, especially where public opinion is already clear.

Ewen also points to early 20th-century American history, where political leaders and marketers were successful in manipulating mass psychology in at least three ways: 1. drawing the pacifist U.S. public into two world wars of European origin, 2. transforming citizens into consumers, thereby saving capitalism from its inherently self-destructive endgame (creeping up on us yet again), and 3. suppressing emergent collectivism, namely, socialism. Of course, unionism as a collectivist institution still gained considerable strength but only within the larger context of capitalism, e.g., achieving the American Dream in purely financial terms.

So getting back to Scott Adams’ argument, the notion that the American public is under some form of mass hypnosis (persuasion) and that 45 is the master puppeteer is perhaps half true. Societies do sometimes go mad and fall under the spell of a mania or cult leader. But 45 is not the driver of the current episode, merely the embodiment. I wouldn’t say that 45 figured out anything, because that awards too much credit to presumed understanding and planning. Rather, he worked out (accidentally and intuitively — really by default considering his job in 2016) that his peculiar self-as-brand could be applied to politics by treating it all as reality TV, which by now everyone knows is its own weird unreality the same way professional wrestling is fundamentally unreal. (The term political theater applies here.) He demonstrated a knack (at best) for keeping the focus firmly on himself and driving ratings (abetted by the mainstream media that had long regarded him as a clown or joke), but those objectives were never really in service of a larger political vision. In effect, the circus brought to town offers its own bizarre constructed narrative, but its principal characteristic is gawking, slack-jawed, made-you-look narcissism, not any sort of proper guidance or governance.

I’ve written a different form of this blog post at least once before, maybe more. Here’s the basic thesis: the bizarro unreality of the world in which we now live is egregious enough to make me wonder if we haven’t veered wildly off the path at some point and now exist within reality prime. I suppose one can choose any number of historical inflections to represent the branching point. For me, it was the reelection of George W. Bush in 2004. (The 9/11 attacks and “wars” in Afghanistan and Iraq had already occurred or commenced by then, and it had already been revealed that the lies — Saddam had WMDs — that sold the American public on the Iraq “war” were effective and remain so today.) Lots of other events changed the course of history, but none other felt as much to me like a gut punch precisely because, in the case of the 2004 presidential election, we chose our path. I fantasized waking up from my reality-prime nightmare but eventually had to grudgingly accept that if multiverses exist, ours had become one where we chose (collectively, and just barely) to keep in office an executive who behaved like a farce of stupidity. Well, joke’s on us. Twelve years later, we chose someone even more stupid, though with a “certain serpentine cunning,” and with arguably the worst character of any U.S. executive in living history.

So what to do in the face of this dysfunctional state of affairs? Bret Weinstein below has ideas. (As usual, I’m quite late, embedding a video that by Internet standards is already ancient. I also admit this is equivalent to a smash cut because I don’t have a particularly good transition or justification for turning so suddenly to Weinstein.) Weinstein is an evolutionary biologist, so no surprise that the approach he recommends is borne out of evolutionary thinking. In fairness, a politician would logically recommend political solutions, a financier would recommend economic solutions, and other professionals would seek solutions from within their areas of expertise.

The title of the interview is “Harnessing Evolution,” meaning Weinstein suggests we use evolutionary models to better understand our own needs and distortions to guide or plot proper path(s) forward and get back on track. Never mind that a healthy minority of the U.S. public rejects evolution outright while an additional percentage takes a hybrid stance. While I’m impressed that Weinstein has an answer for everything (pedagogue or demagogue or both?) and has clearly thought through sociopolitical issues, I daresay he’s living in reality double-prime if he thinks science education can be a panacea for what ails us. My pessimism is showing.

As a student, practitioner, and patron of the fine arts, I long ago imbibed the sybaritic imploration that beauty and meaning drawn out of sensory stimulation were a significant source of enjoyment, a high calling even. Accordingly, learning to decode and appreciate the conventions of various forms of expression required effort, which was repaid and deepened over a lifetime of experience. I recognize that, because of their former close association with the European aristocracy and American moneyed class, the fine arts (Western genres) have never quite distanced themselves from charges of elitism. However, I’ve always rejected that perspective. Since the latter part of the 20th century, the fine arts have never been more available to people of all walks of life, as crowds at art galleries attest.

Beyond the fine arts, I also recognize that people have a choice of aesthetics. Maybe it’s the pageantry of sports (including the primal ferocity of combat sports); the gastronomic delight of a fine meal, liquor, or cigar; identification with a famous brand; the pampered lifestyles of the rich and famous, with their premium services, personal staffs, and entourages; the sound of a Harley-Davidson motorcycle or a 1970s American muscle car; the sartorial appointments of high fashion and couture; simple biophilia; the capabilities of a smartphone or other tech device; or the brutal rhetoric and racehorse politics of the campaign trail. Take your pick. In no way do I consider the choice of one aesthetic versus another equivalent. Differences of quality and intent are so obvious that any relativist claim asserting false equivalence ought to be dismissed out of hand. However, there is considerable leeway. One of my teachers summed up taste variance handily: “that’s why they make chocolate and vanilla.”

Beauty and meaning are not interchangeable, but they are often sloppily conflated. The meaning found in earnest striving and sacrifice is a quintessential substitute for beauty. Thus, we’re routinely instructed to honor our troops for their service. Patriotic holidays (Independence Day, Memorial Day, Veterans Day, and others) form a thematic group. Considering how the media reflexively valorizes (rarely deploring) acts of force and mayhem authorized and carried out by the state, and how the citizenry takes that instruction and repeats it, it’s fair to say that an aesthetic attaches to such activity. For instance, some remember (with varying degrees of disgust) news anchor Brian Williams waxing rhapsodic over the Syrian conflict. Perhaps Chris Hedges’ book War is a Force That Gives Us Meaning provides greater context. I haven’t read the book, but the title is awfully provocative, which some read as an encomium to war. Book jacket blurbs and reviews indicate more circumspect arguments drawn from Hedges’ experience as a war correspondent.

We’re currently in the so-called season of giving. No one can any longer escape the marketing harangues about Black Friday, Small Business Saturday, and Cyber Monday that launch the season. None of those days has much integrity, not that they ever did, since they bleed into each other as retailers strain to get a jump on one or extend another. We’re a thoroughly consumer society, which is itself an aesthetic (maybe I should have written anesthetic). Purchasing decisions are made according to a choice of aesthetics: brand, features, looks, price, etc. An elaborate machinery of psychological prods and inducements has been developed over the decades to influence consumer behavior. (A subgenre of psychology also studies these influences and behaviors.) The same can be said of the shaping of consumer citizen opinion. While some resist being channeled into others’ prescribed thought worlds, the difficulty of maintaining truly original, independent thought in the face of a deluge of both reasonable and bad-faith influence makes succumbing nearly inevitable. Under such conditions, one wonders whether a choice of aesthetic really exists at all.

From time to time, I admit that I’m in no position to referee disputes, usually because I lack technical expertise in the hard sciences. I also avoid the impossible task of policing the Internet, assiduously pointing out error wherever it occurs. Others concern themselves with correcting the record and/or reinterpreting arguments with improved context and accuracy. However, once in a while, something crosses my desk that gets under my skin. An article by James Ostrowski entitled “What America Has Done To its Young People is Appalling,” published at LewRockwell.com, is such a case. It’s undoubtedly a coincidence that the most famous Rockwell is arguably Norman Rockwell, whose celebrated illustrations for the Saturday Evening Post in particular helped reinforce a charming midcentury American mythology. Lew Rockwell, OTOH, is described briefly in the website’s About blurb:

The daily news and opinion site LewRockwell.com was founded in 1999 by anarcho-capitalists Lew Rockwell … and Burt Blumert to help carry on the anti-war, anti-state, pro-market work of Murray N. Rothbard.

Those political buzzwords probably deserve some unpacking. However, that project falls outside my scope. In short, they handily foist blame for what ails us in American culture on government planning, as distinguished from the comparative freedom of libertarianism. Government earns its share of blame, no doubt, especially with its enthusiastic prosecution of war (now a forever war); but as snapshots of competing political philosophies, these buzzwords are reductive almost to the point of meaninglessness. Ostrowski lays blame more specifically on feminism and progressive big government and harkens back to an idyllic 1950s nuclear family fully consonant with Norman Rockwell’s illustrations, thus invoking the nostalgic frame.

… the idyllic norm of the 1950’s, where the mother typically stayed home to take care of the kids until they reached school age and perhaps even long afterwards, has been destroyed.  These days, in the typical American family, both parents work fulltime which means that a very large percentage of children are consigned to daycare … in the critical first five years of life, the vast majority of Americans are deprived of the obvious benefits of growing up in an intact family with the mother at home in the pre-school years. We baby boomers took this for granted. That world is gone with the wind. Why? Two main reasons: feminism and progressive big government. Feminism encouraged women to get out of the home and out from under the alleged control of husbands who allegedly controlled the family finances.

Problem is, 1950s social configurations in the U.S. were the product of a convergence of historical forces, not least of which were the end of WWII and newfound American geopolitical and economic prominence. More pointedly, an entire generation of young men and women who had deferred family life during perilous wartime were then able to marry, start families, and provide for them on a single income — typically that of the husband/father. That was the baby boom. Yet to enjoy the benefits of the era fully, one probably needed to be a WASPy middle-class male or the child of one. Women and people of color fared … differently. After all, the 1950s yielded to the sexual revolution and civil rights era one decade later, both of which aimed specifically to improve the lived experience of, well, women and people of color.

Since the 1950s were only roughly 60 years ago, it might be instructive to consider how life was another 60 years before then, or in the 1890s. If one lived in an eastern American city, life was often a Dickensian dystopia, complete with child labor, poorhouses, orphanages, asylums, and unhygienic conditions. If one lived in an agrarian setting, which was far more prevalent before the great 20th-century migration to cities, then life was frequently dirt-poor subsistence and/or pioneer homesteading requiring dawn-to-dusk labor. Neither mode yet enjoyed social planning and progressive support including, for example, sewers and other modern infrastructure, public education, and economic protections such as unionism and trust busting. Thus, 19th-century America might be characterized fairly as being closer to anarcho-capitalism than at any time since. One of its principal legacies, one must be reminded, was pretty brutal exploitation of (and violence against) labor, as evidenced by the emergence of political parties that sought to redress its worst scourges. Hindsight informs us now that reforms were slow, partial, and impermanent, leading to the observation that among all tried forms of self-governance, democratic capitalism can be characterized as perhaps the least awful.

So yeah, the U.S. came a long way from 1890 to 1950, especially in terms of standard of living, but may well be backsliding as the 21st-century middle class is hollowed out (a typical income — now termed household income — being rather challenging for a family), aspirations to rise economically above one’s parents’ level no longer function, and the culture disintegrates into tribal resentments and unrealistic fantasies about nearly everything. Ostrowski marshals a variety of demographic facts and figures to support his argument (with which I agree in large measure), but he fails to make a satisfactory causal connection with feminism and progressivism. Instead, he sounds like 45 selling his slogan Make America Great Again (MAGA), meaning let’s turn back the clock to those nostalgic 1950s happy days. Interpretations of that sentiment run in all directions from innocent to virulent (but coded). It’s not difficult to hear anyone placing blame on feminism and progressivism as making an accusation: if only those feminists and progressives (and others) had stayed in their assigned lanes, we wouldn’t be dealing now with cultural crises that threaten to undo us. What Ostrowski fails to acknowledge is that despite all sorts of government activity over the decades, no one in the U.S. is steering the culture nearly as actively as in centrally planned economies and cultures, current and historical, which in their worst instances are fascist and/or totalitarian. One point I’ll agree on, however, just to be charitable, is that the mess we’ve made and will leave to youngsters is truly appalling.

I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases born of methodology and, to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple of decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Throughout history, prohibitions against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without naming it explicitly, Harari is citing the precautionary principle. Harari also observed that some of those effects can be displaced hundreds and thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans through reckless dabbling in genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.

I mentioned blurred categories in music a short while back. An interesting newsbit popped up in the New York Times recently about this topic. Seems a political science professor at SUNY New Paltz, Gerald Benjamin, spoke up against a Democratic congressional candidate for New York’s 18th district, Antonio Delgado, the latter of whom had been a rapper. Controversially, Benjamin said that rap is not real music and does not represent the values of rural New York. Naturally, the Republican incumbent got in on the act, too, with denunciations and attacks. The electoral politics angle doesn’t much interest me; I’m not in or from the district. Moreover, the racial and/or racist elements are so toxic I simply refuse to wade in. But the professorial pronouncement that rap music isn’t really music piqued my interest, especially because that argument caused the professor to be sanctioned by his university. Public apologies and disclaimers were issued all around.

Events also sparked a fairly robust commentary at Slipped Disc. The initial comment by V.Lind echoes my thinking pretty well:

Nobody is denying that hip-hop is culturally significant. As such it merits study — I have acknowledged this … The mystery is its claims to musical credibility. Whatever importance rap has is in its lyrics, its messages (which are far from universally salutory [sic]) and its general attempt to self-define certain communities — usually those with grievances, but also those prepared to develop through violence, sexism and other unlovely aspects. These are slices of life, and as such warrant some attention. Some of the grievances are well-warranted …

But music? Not. I know people who swear by this genre, and their ears are incapable of discerning anything musical in any other. If they wanted to call it poetry (which I daresay upon scrutiny would be pretty bad poetry) it would be on stronger legs. But it is a “music” by and for the unmusical, and it is draining the possibility of any other music out of society as the ears that listen to it hear the same thing, aside from the words, for years on end.

Definitely something worth studying. How the hell has this managed to become a dominant force in what is broadly referred to as the popular music world?

The last time (as memory serves) categories or genres blurred leading to outrage was when Roger Ebert proclaimed that video games are not art (and by inference that cinema is art). Most of us didn’t really care one way or the other where some entertainment slots into a category, but gamers in particular were scandalized. In short, their own ox was gored. But when it came to video games as art, there were no racial undertones, so the sometimes heated debate was at least free of that scourge. Eventually, definitions were liberalized, Ebert acknowledged the opposing opinion (I don’t think he was ever truly convinced, but I honestly can’t remember — and besides, who cares?), and it all subsided.

The impulse to mark hard, discrete boundaries between categories and keep unlike things from touching is pretty foolish to me. It’s as though we’re arguing about the mashed potatoes and peas not infecting each other on the dinner plate with their cooties. Never mind that it all ends up mixed in the digestive tract before finally reemerging as, well, you know. Motivation to keep some things out is no doubt due to prestige and cachet, where the apparent interloper threatens to change the status quo somehow, typically infecting it with undesirable and/or impure elements. We recognize this fairly readily as in-group and out-group, an adolescent game that ramps up, for instance, when girls and boys begin to differentiate in earnest at the onset of puberty. Of course, in the last decade, so-called identitarians have been quite noisy about their tribal affiliations and self-proclaimed identities, many falling far, far out of the mainstream, and have demanded they be taken seriously and/or granted status as a protected class.

All this extends well beyond the initial topic of musical or artistic styles and genres. Should be obvious, though, that we can’t escape labels and categories. They’re a basic part of cognition. If they weren’t, one would have to invent them at every turn when confronting the world, judging safe/unsafe, friend/foe, edible/inedible, etc., just to name a few binary categories. Complications multiply quickly when intermediary categories are present (race is the most immediate example, where most of us are mixtures or mutts despite whatever our outer appearance may be) or categories are blurred. Must we all then rush to restore stability to our understanding of the world by hardening our mental categories?