Archive for the ‘Artistry’ Category

Ours is an era when individuals are encouraged to explore, amplify, and parade various attributes of their identities out in public, typically via social media. For those just coming of age or recently entered into adulthood, whose identities are not yet fully formed, defining oneself is less an invitation than a demand. When identity is further complicated by unusual levels of celebrity, wealth, beauty, and athleticism (lots of overlap there), defining oneself is often an act of rebellion against the perceived demands of an insatiable public. Accordingly, it was unsurprising, to me at least, to learn of several well-known people unhappy with their lives and the burdens placed upon them.

Regular folks can’t truly relate to the glitterati, who are often held up as aspirational models. For example, many of us look upon the discomforts of Prince Harry and Meghan Markle with a combination of perverse fascination and crocodile tears. They were undoubtedly trapped in a strange, gilded prison before repudiating the duties expected of them as “senior royals,” attempting an impossible retreat to normalcy outside of England. It should be obvious that they will continue to be hounded for as long as public interest in them persists. Similarly, Presley Gerber made news, fell out of the news, and then got back into the news as a result of his growing collection of tattoos. Were he simply some anonymous fellow, few would care. However, he has famous parents and had already launched a modeling career before his face tattoo announced his sense of being “misunderstood.” Pretty bold move. With all the presumed resources and opportunities at his disposal, many have wondered, in comments and elsewhere, whether another, better declaration of self might have been preferable.

Let me give these three the benefit of the doubt. Although they all have numerous enviable attributes, the accident of birth (or in Markle’s case, the decision to marry) landed them in exceptional circumstances. The percentage of celebrities who crack under the pressure of unrelenting attention and proceed to run off the rails is significant. Remaining grounded is no doubt easier if one attains celebrity (or absurdly immense wealth) after, say, the age of 25 or even later. (On some level, we’ve all lost essential groundedness in reality, but that’s another blog post.) Those who are children of celebrities or who become child prodigies may not all be consigned to character distortion or a life irrevocably out of balance, but such outcomes are commonplace enough that the danger should be recognized and embraced only with wariness. I’ve heard of programs designed to help professional athletes who become sudden multimillionaires (and thus targets of golddiggers and scammers) make the transition. Good for them that structured support is available. Yet another way average folks can’t relate: we have to work things out for ourselves.

Here’s the example I don’t get: Taylor Swift. She was the subject of a Netflix documentary called Miss Americana (2020) that paints her as, well, misunderstood. Thing is, Swift is a runaway success story, raking in money, fans, awards, attention, and, inevitably, detractors. That success is something she earnestly desired and struggled to achieve, only to learn that the glossy, popstar image sold especially but not exclusively to 14-year-old girls comes with a lot of heavy baggage. How can the tragic lives of so many musicians launched into superstardom from the late 1950s onward have escaped Swift’s awareness in our media-saturated world? Naming names is sorta tacky, so I demur, but there are lots of them. Swift obtained her heart’s desire, found her songwriting and political voice, maintains a high public profile, and shows no lack of productivity. Sure, it’s a life out of balance, not remotely normal the way most noncelebrities would understand. However, she signed up for it willingly (if naïvely) and by all accounts perpetuates it. She created her own distinctive gilded prison. I don’t envy her, nor do I particularly feel sorry for her, as the Netflix show appears to instruct.

The old saw goes that acting may be just fine as a creative endeavor, but given the opportunity, most actors really want to direct. A similar remark is often made of orchestral musicians, namely, that most rank-and-file players would really rather conduct. Directing and conducting may not be the central focus of creative work in their respective genres; after all, directors don’t normally appear onscreen and conductors make no sound. Instead, they coordinate the activities of an array of creative folks, which puts them in a unique position to bring about a singular vision in otherwise collaborative work. A further example is the Will to Power (Friedrich Nietzsche’s notion, with antecedents in Arthur Schopenhauer’s will) characteristic of those who wish to rule (as distinguished from those who wish to serve), such as regents, dictators, and autocrats. All of this sprang to mind because, despite the outward appearance of a free, open society in the U.S., recent history demonstrates that the powers that be have instituted a directed election and a directed economy quite at odds with democracy or popular opinion.

The nearest analogy is probably the directed verdict, where a judge removes the verdict from the hands or responsibility of the jury by directing the jury to return a particular verdict. In short, the judge decides the case for the jury, making the jury moot. I have no idea how commonplace directed verdicts are in practice.

Directed Election

Now that progressive candidates have been run out of the Democratic primaries, the U.S. presidential election boils down to which stooge to install (or retain) in November. Even if Biden is eventually swapped out for another Democrat at a brokered nominating convention (highly likely, according to many), it’s certain to be someone fully amenable to entrenched corporate/financial interests. Accordingly, the deciders won’t be the folks who dutifully showed up and voted in their state primaries and caucuses but instead party leaders. One could try to argue that, as elected representatives of the people, party leaders act on behalf of their constituencies (governing by consent of the people), but some serious straining is needed to arrive at that view. Votes cast in the primaries thus far demonstrate a persistent desire for something distinctly other than the status quo, at least in the progressive wing of the Democratic party. Applying the cinematic metaphor of the opening paragraph, voters are a cast of thousands (make that millions) being directed within a larger political theater toward a predetermined result.

Anyone paying attention knows that voters are rarely given options that aren’t in fact different flavors of the same pro-corporate agenda. Thus, no matter whom we manage to elect in November, the outcome has already been engineered. This is true not only by virtue of the narrow range of candidates able to maneuver successfully through the electoral gauntlet but also because of perennial distortions of the balloting process such as gerrymandering, voter suppression, and election fraud. Claims that both sides (really just one side) indulge in such practices so everything evens out don’t convince me.

Directed Economy

Conservative economists and market fundamentalists never seem to tire of arguing in the abstract that capitalist mechanisms, left alone (unregulated, laissez-faire) to work their magic, deliver optimal outcomes when it comes to social and economic justice. Among the primary mechanisms is price discovery. However, economic practice never even remotely approaches the purity of abstraction because malefactors continuously distort and game economic systems out of self-interest (read: greed). Price discovery is broken, and equitable economic activity is rendered fundamentally fictitious. For example, the market for gemstones is famously inflated by a narrow consortium of sellers having successfully directed consumers to adopt a cultural standard of spending three months’ wages/salary on an engagement ring as a demonstration of one’s love and devotion. In the opposite direction, precious metal spot prices are suppressed despite very high demand and nearly nonexistent supply. Current quoted premiums over the spot silver price, even though no delivery is contemplated, range from roughly 20% to an absurd 2,000%. Supply and demand curves no longer function to aid in true price discovery (if such a thing ever existed). In a more banal sense, what people are willing to pay for a burger at a fast food joint or a loaf of bread at the grocery may affect the price charged more directly.
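For concreteness, here’s a minimal sketch of the premium arithmetic in Python; the dollar figures are hypothetical placeholders chosen only to land within the quoted percentage range, not actual market quotes:

def premium_over_spot(spot_price: float, quoted_price: float) -> float:
    # Premium over spot, expressed as a percentage of the spot price.
    return (quoted_price - spot_price) / spot_price * 100

# Hypothetical per-ounce silver prices, purely illustrative (not real quotes):
spot = 15.00
low_quote = 18.00    # works out to a 20% premium
high_quote = 315.00  # works out to a 2,000% premium

print(premium_over_spot(spot, low_quote))   # 20.0
print(premium_over_spot(spot, high_quote))  # 2000.0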

Nowhere is it more true that we’ve shifted to a directed economy than with the stock market (i.e., Wall Street vs. Main Street). As with the housing market, a real-world example with which many people have personal experience, if a buyer of a property or asset fails to appear within a certain time frame (longer for housing, shorter for stocks, bonds, and other financial instruments), the seller is generally obliged to lower the price until a buyer finally appears. Some housing markets extraordinarily flush with money (e.g., Silicon Valley and Manhattan) trigger wild speculation and inflated prices that drive out all but the wealthiest buyers. Moreover, when the eventual buyer turns out to be a bank, corporation, or government entity willing to overpay for the property or asset using someone else’s money, the market becomes wholly artificial. This has been the case with the stock market for the last twelve years, with cheap money being injected nonstop via bailouts and quantitative easing to keep asset prices inflated. When fundamental instabilities began dragging the stock market down last fall, accelerating precipitously in early spring of this year and resulting in yet another crash (albeit brief), the so-called Plunge Protection Team sprang into action and wished trillions of dollars (taxpayer debt, actually, and over the objections of taxpayers in a classic fool-me-once scenario) into existence to perpetuate the casino economy and keep asset prices inflated for the foreseeable future, which isn’t very long.

The beneficiaries of this largesse are the same as they have always been when tax monies and public debt are concerned: corporations, banks, and the wealthy. Government economic supports are directed to these entities, leaving all others in the lurch. Claims that bailouts are needed to keep large corporate entities and wealthy individuals whole so that the larger economy doesn’t seize up and fail catastrophically are preposterous, because the larger economy already has seized up and failed catastrophically while the population is mostly quarantined, throwing many individuals out of work and shuttering many businesses. A reasonable expectation of widespread insolvency and bankruptcy lingers, waiting for the workouts and the numbers to mount up.

The power of the purse possessed by the U.S. Congress hasn’t been used to help the citizenry since the New Deal era of FDR. Instead, military budgets and debts expand enormously while entitlements and services to the needy and vulnerable are whittled away. Citizen rebellions are already underway in small measure, mostly aimed at the quarantines. When bankruptcies, evictions, and foreclosures start to swell, watch out. Our leaders’ fundamental mismanagement of human affairs is unlikely to be swallowed quietly.

Purpose behind consumption of different genres of fiction varies. For most of us, it’s about responding to stimuli and experiencing emotions vicariously, which is to say, safely. For instance, tragedy and horror can be enjoyed, if that’s the right word, in a fictional context to tweak one’s sensibilities without significant effect outside the story frame. Similarly, fighting crime, prosecuting war, or repelling an alien invasion in a video game can be fun but is far removed from actually doing those things in real life (not fun). For less explicitly narrative forms, such as music, the feelings evoked are aesthetic and artistic in nature, which makes a sad song or tragic symphony enjoyable on its own merits without bleeding far into real sadness or tragedy. Cinema (now blurred with broadcast TV and streaming services) is the preeminent storytelling medium, one that provokes all manner of emotional response. After reaching a certain age (middle to late teens), emotional detachment from depictions of sexuality and violent mayhem makes it possible to digest such stimulation as entertainment — except in cases where prior personal trauma is triggered. Before that age, nightmare-prone children are best kept away.

Dramatic conflict is central to driving plot and story forward, and naturally, folks are drawn to some stories while avoiding others. Although I’m detached enough not to be upset by, say, zombie films where people and zombies alike are dispatched horrifically, I wouldn’t say I enjoy gore or splatter. Similarly, realistic portrayals of war (e.g., Saving Private Ryan) are not especially enjoyable for me despite the larger story, whether based on true events or entirely made up. The primary reason I abandon a movie or TV show partway through is that I simply don’t enjoy watching suffering.

Another category bugs me even more: when fiction intrudes on reality to remind me too clearly of actual horrors (or is it the reverse: reality intruding on fiction?). It doesn’t happen often. One of the first instances I recall was in Star Trek: The Next Generation, when the story observed that (fictional) warp travel produced some sort of residue akin to pollution. The reminder that we humans are destroying the actual environment registered heavily on me and ruined my enjoyment of the fictional story. (I also much prefer the exploration and discovery aspects of Star Trek that hew closer to Gene Roddenberry’s original vision than the militaristic approach now central to the franchise.) A much more recent intrusion occurs in the rather adolescent TV show The 100, where a global nuclear exchange launched by an artificial intelligence has the follow-on effect, a century later, of remaining nuclear sites going critical, melting down, and irradiating the Earth, making it uninhabitable. This bothers me because that’s my expectation of what happens in reality, probably not too long (decades) after industrial civilization collapses and most or all of us are dead. This prospect served up as fiction is simply too close to reality for me to enjoy vicariously.

Another example of fiction intruding too heavily on my doomer appreciation of reality occurred retroactively. As high-concept science fiction, I especially enjoyed the first Matrix movie. Like Star Trek, the sequels degraded into run-of-the-mill war stories. But what was provocative about the original was the matrix itself: a computer-generated fiction situated within a larger reality. Inside the matrix was pleasant enough (though not without conflict), but reality outside the matrix was truly awful. It was a supremely interesting narrative and thought experiment when it came out in 1999. Now, twenty-one years later, it’s increasingly clear that we are living in a matrix-like, narrative-driven hyperreality intent on deluding us into perceiving a pleasant equilibrium that simply isn’t in evidence. In fact, as societies and as a civilization, we’re careening out of control, no brakes, no steering. Caitlin Johnstone explores this startling after-the-fact realization in an article at Medium.com, which I found only a couple of days ago. Reality is in fact far worse than the constructed hyperreality. No wonder no one wants to look at it.

That man is me. Thrice in the last month I’ve stumbled headlong into subjects where my ignorance left me grasping in the dark for a ledge or foothold lest I be swept into a maelstrom of confusion by someone’s claims. This sensation is not unfamiliar, but it’s usually easy to beat back. Whereas I possess multiple areas of expertise and as an autodidact am constantly absorbing information, I nonetheless recognize that even in areas where I consider myself qualified to act and/or opine confidently, others possess authority and expertise far greater than mine. Accordingly, I’ve always considered myself a generalist. (A jack of all trades is not quite the same thing IMO, but I decline to draw that distinction here.)

Decisions must inevitably be made on insufficient information. That’s true because more information can always be added on top, which leads to paralysis or infinite regress if one doesn’t simply draw an arbitrary line and stop dithering. This is also why I aver periodically that consciousness is based on sufficiency, meaning “good enough.” A paradox results: a decision can be good enough to act on even though the information needed for a full, balanced analysis is obviously incomplete, if fullness can even be achieved. Knowledge is thus sufficient and insufficient at the same time. Banal, everyday purchasing decisions at the grocery store are low risk. Accepting a job offer, moving to a new city, and proposing marriage carry significant risks but are still decisions made on insufficient information precisely because they’re prospective. There is no way of knowing with certainty how things will turn out. (more…)

Color me surprised to learn that 45 is considering a new executive order mandating that the “classical architectural style shall be the preferred and default style” for new and upgraded federal buildings, revising the Guiding Principles for Federal Architecture issued in 1962. Assuredly, 45 is hardly the person expected to weigh in on respectable aesthetic choices, considering his taste runs more toward gaudy, glitzy, ostentatious surface display (Baroque, roughly) than toward restraint, dignity, poise, and balance (Classical or Neoclassical).

Since I pay little attention to mainstream news propaganda organs, I learned of this from James Howard Kunstler’s blog Clusterfuck Nation (see blogroll), which treated the order as though it had already been issued; apparently it’s still in draft. Twas nice to read Kunstler returning to his roots in architectural criticism. He’s never left it behind entirely; his website has a regular feature called Eyesore of the Month, which I rather enjoy reading. He provides a brief primer on how architectural styles in the 20th century (all lumped together as Modernism) embody the Zeitgeist, namely, techno-narcissism. (I’m unconvinced that Modernism is a direct rebuke of 20th-century fascists who favored Classicism.) Frankly, with considerably more space at his disposal, Iain McGilchrist explores Modernist architecture better and with far greater erudition in The Master and His Emissary (2010), which I blogged through some while ago. Nonetheless, this statement by Kunstler deserves attention:

The main feature of this particular moment is that techno-industrial society has entered an epochal contraction presaging collapse due to over-investments in hyper-complexity. That hyper-complexity has come to be perfectly expressed in architecture lately in the torqued and tortured surfaces of gigantic buildings designed by computers, with very poor prospects for being maintained, or even being useful, as we reel into a new age of material scarcity and diminished expectations …

This is the life-out-of-balance statement in a nutshell. We are over-extended and wedded to an aesthetic of power that requires preposterous feats of engineering to build and continuous resource inputs to operate and maintain. (Kunstler himself avers elsewhere that an abundance of cheap, easily harvested energy enabled the Modern Era, so chalking up imminent collapse due primarily to over-investment in hyper-complexity seems like substitution of a secondary or follow-on effect for the main one.) My blogging preoccupation with skyscrapers demonstrates my judgment that the vertical dimension of the human-built world in particular is totally out of whack, an instantiation of now-commonplace stunt architecture. Should power ever fail for any sustained duration, reaching floors above, say, the 10th and delivering basic services to them, such as water for sinks and toilets, quickly becomes daunting.

However, that’s a technical hurdle, not an aesthetic consideration. The Modernist government buildings in question tend to be Brutalist designs, which often look like high-walled concrete fortresses or squat, impenetrable bunkers. (Do your own image search.) They project bureaucratic officiousness and indifference, if not open hostility, toward the people they purport to serve. Basically, enter at your own risk. They share with the International Style a formal adherence to chunky geometric forms, often presented impassively (as pure abstraction) or in an exploded view (analogous to a cubist painting showing multiple perspectives simultaneously). Curiously, commentary at the links above is mostly aligned with perpetuating the Modernist project and aesthetic as described by Kunstler and McGilchrist. No interruptions, difficulties, or vulnerabilities are contemplated. Commentators must not be reading the same analyses I am, or they’re blithely supportive of progress in some vague sense, itself a myth we tell ourselves.

One of the victims of cancel culture, coming to my attention only days ago, is Kate Smith (1907–1986), a singer of American popular song. Though Smith had a singing career spanning five decades, she is best remembered for her version(s) of Irving Berlin’s God Bless America, which justifiably became a bit of Americana. The decades of Smith’s peak activity were the 1930s and 40s.

/rant on

I dunno what goes through people’s heads, performing purity rituals or character excavation on folks long dead. The controversy stems from Smith having a couple of other songs in her discography: That’s Why Darkies Were Born (1931) and Pickaninny Heaven from the movie Hello, Everybody! (1933). Hate to break it to anyone still living under a rock, but these dates are not far removed from minstrelsy, blackface, and The Birth of a Nation (1915) — a time when typical Americans referred to blacks with a variety of terms we now consider slurs. Such references were still used during the American civil rights movement (1960s) and are in use among some virulent white supremacists even today. I don’t know the full context of Kate Smith having sung those songs, but I suspect I don’t need to. In that era, popular entertainment had few of the sensibilities regarding race we now have (culture may have moved on, but it’s hard to say with a straight face that it has evolved or progressed humanely), and uttering commonly used terms back then was not automatic evidence of any sort of snarling racism.

I remember having heard my grandparents, nearly exact contemporaries of Kate Smith, referring to blacks (the term I grew up with, still acceptable I think) with other terms we no longer consider acceptable. It shocked me, but to them, that’s simply what blacks were called (the term(s) they grew up with). Absolutely nothing in my grandparents’ character or behavior indicated a nasty, racist intent. I suspect the same was true of Kate Smith in the 1930s.

Back when I was a librarian, I also saw plenty of sheet music published before 1920 or so with the term darkie (or darkey) in the title. See for example this. The Library of Congress still uses the subject headings “negro spirituals” (is there another kind?) and “negro songs” to refer to various subgenres of American folk song that include slave songs, work songs, spirituals, minstrel music, protest songs, etc. Maybe we should cancel the Library of Congress. Some published music titles from back then even call them coon songs. That last one is totally unacceptable today, but it’s frankly part of our history, and like changing character names in Mark Twain’s Huckleberry Finn, sanitizing the past does not make it go away or any less discomfiting. But if you wanna bury your head in the sand, go ahead, ostrich.

Also, if some person or entity ever did something questionable in the past (racist, sexist, or otherwise malign, even if short of abominable), does that mean he, she, or it must be cancelled irrevocably? If that be the case, then I guess we gotta cancel composer Richard Wagner, one of the most notorious anti-Semites of the 19th century. Also, stop watching Pixar, Marvel, and Star Wars films (among others), because remember that time when Walt Disney Studios (now the Walt Disney Company) made a racist musical film, Song of the South (1946)? Disney’s tainted legacy (extending well beyond that one movie) is at least as awful as, say, Kevin Spacey’s, and we’re certainly not about to rehabilitate Spacey.

/rant off

Magnitude

Posted: January 6, 2020 in Artistry, Corporatism, Culture, Science

Something I read somewhere (lost track of what and when) sparked some modest inquiry into the mathematical concept of magnitude, or more specifically, the order of magnitude. I suspect, consistent with the doomer themes of this blog, that it was a statement to the effect that the sixth extinction (or Holocene extinction if you prefer) is proceeding at some order of magnitude faster than previous mass extinction events.

Within various scientific fields, magnitude has specific and specialized meanings. For instance, the Richter Scale, used to denote the power of earthquakes, is a familiar though poorly understood measure reported in the aftermath of an event. (It’s a logarithmic scale: each whole-number step represents roughly a tenfold increase in measured amplitude, i.e., one order of magnitude.) Magnitudes of distance and time are more immediately understood in the mundane sense of how far/long it takes to travel somewhere (via foot, bicycle, car, train, plane, etc.) and more exotically outside Earth orbit as depicted in science fiction. Perhaps the most cognitively accessible illustration of magnitude, however, is scale. Arguably, size (absolute?) and scale (comparative?) are intertwined with distance, or even more broadly, time-space. I’ll leave that discussion to someone who knows better than I do.
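Since the phrase “order of magnitude” is doing the heavy lifting here, a minimal sketch in Python of how such comparisons work; the extinction-rate numbers are placeholders for illustration, not sourced estimates:

import math

def order_of_magnitude(value: float) -> int:
    # Base-10 order of magnitude of a positive quantity.
    return math.floor(math.log10(value))

def magnitude_difference(a: float, b: float) -> int:
    # How many orders of magnitude separate two positive quantities.
    return order_of_magnitude(a) - order_of_magnitude(b)

# Placeholder rates (extinctions per unit time), purely illustrative:
background_rate = 0.1
current_rate = 100.0

print(order_of_magnitude(current_rate))                     # 2
print(magnitude_difference(current_rate, background_rate))  # 3, i.e., a thousandfold gap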

All that said, I recalled from boyhood a short film depicting scale in terms of Powers of Ten. Unsurprisingly, I found it on YouTube (embedded below).

Perhaps it’s just my refurbishing of memory, but this film (now video) has a sense of wonder and amazement, sort of like how Disney properties (e.g., films, TV shows, theme parks, merchandise) from the 1960s and 70s retained an innocence from the time when Walt Disney himself was in charge. Early NASA orbital missions and moonshots had that quality, too, but NASA’s wonder years dissipated around the time space shuttles went into service, demonstrating that NASA’s primary objective was neither technical innovation nor exploration anymore but rather commerce, namely, putting satellites into orbit for communications services. Just this past year, the risible U.S. Space Force, wished into existence by 45 single-handedly over all reasonable objections (not unlike the border wall with Mexico), demonstrates a similar loss of innocence. It’s essentially an attempt to patrol and/or weaponize the commons. I’d like to believe that military personnel are dutifully obeying a pointless command from the commander-in-chief and will abandon or scuttle the new military branch once 45 is out of office. Time will tell.

Loss of innocence may be inevitable in the postmodern world given our jadedness, cynicism, and oh-so-hip ironic detachment. It’s not a good look on us. For instance, once Disney went corporate, the aesthetic pioneered and guided by old Walt changed for the worse. Relatively recent acquisitions of Pixar, Marvel, and Star Wars (among others) and expansion of theme parks and resorts reveal an entertainment behemoth geared more cynically toward money-making than artistry or inspiration. The apparent insufficiency of earlier incarnations of NASA and Disney finds a parallel in an updated version of Powers of Ten (not embedded), narrated by Morgan Freeman (because … why not?) and using the same basic script but employing whiz-bang graphics considerably enhanced over their 1977 counterparts. Even the pop-culture digital network Buzzfeed (not exactly a venerated news source) gets in on the action with its derivative, examination-lite treatment of cosmic scale (ignoring the microscopic and subatomic):

Going back to the idea of magnitude, I’m aware of four time-scales in common use: human history, evolutionary time, geological time, and cosmic time. Few contextualize the last 2–3 centuries this way, but human activity has had substantial effects that collapse events usually occurring over evolutionary or geological time into human history. We unwittingly launched a grand terraforming project but have yet to demonstrate overriding care for the eventual outcomes. The, um, magnitude of our error cannot be overstated.

I’ve been on the sidelines of the Chicago Symphony Orchestra (CSO) musicians’ union labor action — a strike now extending into its second month with no apparent resolution in sight — and reluctant to take a strong position. This might be surprising considering that I’m a natural ally of the musicians in at least two respects: (1) my support for the labor movement in general, and (2) my sustained interest in classical music as both a listener and practitioner. On balance, I have two objections that hold me back: (1) difficulty empathizing with anyone already well compensated for his or her work (CSO base salary is more than $160K per year; many make considerably more), and (2) the implicit argument that, as a premier arts institution, the organization should be insulated from economic effects being felt universally and visited on many who actually suffer deprivations beyond lost prestige.

To buttress their position, the Musicians of the CSO (why do the musicians operate a website distinct from the organization as a whole?) issued a press release in late March 2019 (PDF link). I’ve no desire to analyze it paragraph-by-paragraph, but I want to bring a few bits forward:

For more than 50 years, the Chicago Symphony Orchestra has been touted as the nation’s finest – able to draw talent from across the globe. [emphasis added]

Music is not a championship endeavor, despite the plethora of televised lip-syncing (er, singing) contests. No one orchestra can lay reasonable claim to being the best. Smacks of hubris. Simply change that to “as among the nation’s finest” and I’m OK with it.

In the last seven years the Orchestra’s salary has not kept up with inflation. Further, the Orchestra’s benefit package has fallen behind that of Los Angeles and San Francisco. Now, the Association is attempting to change a fundamental tenet of the security of the Orchestra – and American life – our pension plan.

Well boo hoo for you. Many of the fundamental tenets of American life have been steadily stripped away from the population over the past 40 years or so. The very existence of a pension plan is exceptional for many in the labor force, not to mention the handsome salary and other benefits, including a 20-hour workweek, that CSO musicians enjoy. (Admittedly, a lot of outside preparation is necessary to participate effectively.) I understand that comparisons with sister institutions in LA, SF, and NYC provide context, but cost-of-living differences at the coasts ought to be part of that context, too. Keeping up with the Joneses in this instance is a fool’s errand. And besides, those three cities suffer considerably from homeless and destitute populations that line the sidewalks and alleys. Chicago has somehow managed to displace most of its homeless population (mostly through harassment, not humanitarian aid), though one cannot avoid a phalanx of panhandlers outside Chicago Symphony Center on concert nights. Still, it’s nothing compared to conditions in downtown SF, which have gotten so bad, with people living, peeing, and shitting in the street, that an infamous poop map is available to help pedestrians avoid the worst of it. (I’ve no idea what the sidewalk outside Davies Symphony Hall in SF is like, but the location appears to be in the area of greatest poop concentration.) LA’s skid row is another district straight out of hell.

With many of the musicians already vested, our concern is truly about the future of the Orchestra – its ability to retain and attract great talent – a concern shared by Maestro Muti, Daniel Barenboim, and many of the world’s other finest orchestras and leaders.

This is not a concern of mine in the slightest. Sure, musicians play musical chairs, swapping around from orchestra to orchestra as opportunities arise, just like other workers traipse from job to job throughout their working lives. So what? A performing position with the CSO has long been a terminal position from which many players retire after more than 50 years of service (if they’re so fortunate as to be hired by the orchestra in their 20s). I cannot estimate how many top-tier musicians forgo auditions for the CSO due to perceived inadequacies with compensation or working conditions. Maybe that explains the years-long inability to hire and/or retain personnel for certain principal chairs. Still, I’m not convinced at all by “we’re the best yet we can’t compete without excessive compensation” (or shouldn’t have to). Similar arguments for ridiculously inflated CEO pay to attract qualified individuals fall on deaf ears.

An overview of the musicians’ strike was published by Lawrence A. Johnson at Chicago Classical Review, which provides details regarding the musicians’ demands. According to Johnson, the public’s initial support of the strike has turned sour. Comments I’ve been reading, and my own reaction, have followed exactly this trajectory. Johnson also uses the term tone deaf to describe the musicians, though he’s diplomatic enough to avoid saying it himself, noting that the charge comes from commentators. I won’t be nearly so diplomatic. Musicians, stop this nonsense now! Demands far in excess of need, far in excess of typical workers’ compensation, and far in excess of your bargaining position do you no credit. In addition, although season ticket holders may express dismay at lost opportunities to hear certain concerts, soloists, and repertoire due to the work stoppage, the CSO is not a public utility that must keep operating to maintain public wellbeing. Alternatives in greater Chicagoland can easily take up your slack for those in need of a classical music fix. Indeed, I haven’t been to a CSO concert in years because they’ve become anodyne. My CSO love affair is with the recorded legacy of the 1970s and 80s.

By striking, you’re creating a public relations nightmare that will drive people away, just as the baseball strike and the take-a-knee controversy in football (and elsewhere) sent sports fans scrambling for the exits. You’re tone deaf regarding the actual workplace and contract insufficiencies many others confront regularly, as well as the economic realities of Chicago, Illinois, the U.S., and indeed the globe. Get over yourselves.

For ambulatory creatures, vision is arguably the primary of the five (main) senses. Humans are among those species that stand upright, facilitating a portrait orientation when interacting among ourselves. The terrestrial environment on which we live, however, is in landscape (as distinguished from the more nearly 3D environments of birds and insects in flight or marine life in rivers, lakes, seas, and oceans). My suspicion is that modest visual conflict between portrait and landscape is among the dynamics that give rise to the orienting response, a step down from the startle reflex, which demands full attention when visual environments change.

I recall reading somewhere that wholesale changes in surroundings, such as when crossing a threshold, passing through a doorway, entering or exiting a tunnel, and notably, entering and exiting an elevator, trigger the orienting response. Indeed, the flush of disorientation before one gets his or her bearings is tantamount to a mind wipe, at least momentarily. This response may also help to explain why small, bounded spaces such as interiors of vehicles (large and small) in motion feel like safe, contained, hermetically sealed personal spaces. We orient visually and kinesthetically at the level of the interior, often seated and immobile, rather than at the level of the outer landscape being traversed by the vehicle. This is true, too, of elevators, modern contraptions that confound the nervous system almost as much as revolving doors — which is particularly noticeable with small children and pets until they become habituated to managing such doorways with foreknowledge of what lies beyond.

The built environment has historically included transitional spaces between inner and outer environments. Churches and cathedrals include a vestibule or narthex between the exterior door and inner door leading to the church interior or nave. Additional boundaries in church architecture mark increasing levels of hierarchy and intimacy, just as entryways of domiciles give way to increasingly personal spaces: parlor or sitting room, living room, dining room, kitchen, and bedroom. (The sheer utility of the “necessary” room defies these conventions.) Commercial and entertainment spaces use lobbies, atria, and prosceniums in similar fashion.

What most interests me, however, is the transitional space outside of buildings. This came up in a recent conversation, where I observed that local school buildings from the early to middle part of the 20th century have a distinguished architecture set well back from the street where lawns, plazas, sidewalks, and porches leading to entrances function as transitional spaces and encourage social interaction. Ample window space, columnar entryways, and roof embellishments such as dormers, finials, cupolas, and cornices add style and character befitting dignified public buildings. In contrast, 21st-century school buildings in particular and public buildings in general, at least in the city where I live, tend toward porchless big-box warehouses built right up to the sidewalk, essentially robbing denizens of their social space. Blank, institutional walls forbid rather than invite. Consider, for example, how students gathered in a transitional space are unproblematic, whereas those congregated outside a school entrance abutting a narrow sidewalk suggest either a gauntlet to be run or an eruption of violence in the offing. (Or maybe they’re just smoking.) Anyone forced to climb past loiterers outside a commercial establishment experiences similar suspicions and discomforts.

Beautifully designed and constructed public spaces of yore — demonstrations of a sophisticated appreciation of both function and intent — have fallen out of fashion. Maybe designers understood then how transitional spaces ease the orienting response, or maybe they only intuited it. Hard to say. Architectural designs of the past acknowledged and accommodated social functions and sophisticated aesthetics that are today actively discouraged, except for pointless stunt architecture that usually turns into boondoggles for taxpayers. This has been the experience of many municipalities when replacing or upgrading schools, transit centers, sports arenas, and public parks. Efficient land use today drives toward omission of transitional space. One of my regular reads is James Howard Kunstler’s Eyesore of the Month, which profiles one architectural misfire after the next. He often mocks the lack of transitional space, or when present, observes its open hostility to pedestrian use, including unnecessary obstacles and proximity to vehicular traffic (noise, noxious exhaust, and questionable safety) that discourage use. Chalk this up as another collapsed art (e.g., painting, music, literature, and poetry) so desperate to deny the past and establish new aesthetics that it has ruined itself.

Caveat: Rather uncharacteristically long for me. Kudos if you have the patience for all of this.

Caught the first season of HBO’s series Westworld on DVD. I have a boyhood memory of the original film (1973) with Yul Brynner and a dim memory of its sequel Futureworld (1976). The sheer charisma of Yul Brynner in the role of the gunslinger casts a long shadow over the new production, not that most of today’s audiences have seen the original. No doubt, 45 years of technological development in film production lends the new version some distinct advantages. Visual effects are quite stunning and Utah landscapes have never been used more appealingly in terms of cinematography. Moreover, storytelling styles have changed, though it’s difficult to argue convincingly that they’re necessarily better now than then. Competing styles only appear dated. For instance, the new series has immensely more time to develop its themes; but the ancient parables of hubris and loss of control over our own creations run amok (e.g., Shelley’s Frankenstein, or more contemporaneously, the surprisingly good new movie Upgrade) have compact, appealing narrative arcs quite different from constant teasing and foreshadowing of plot developments while actual plotting proceeds glacially. Viewers wait an awful lot longer in the HBO series for resolution of tensions and emotional payoffs, by which time investment in the story lines has been dispelled. There is also no terrifying crescendo of violence and chaos demanding rescue or resolution. HBO’s Westworld often simply plods on. To wit, a not insignificant portion of the story (um, side story) is devoted to boardroom politics (yawn) regarding who actually controls the Westworld theme park. Plot twists and reveals, while mildly interesting (typically guessed by today’s cynical audiences), do not tie the narrative together successfully.

Still, Westworld provokes considerable interest from me due to my fascination with human consciousness. The initial episode builds out the fictional future world with characters speaking exposition clearly owing its inspiration to Julian Jaynes’s book The Origin of Consciousness in the Breakdown of the Bicameral Mind (another reference audiences are quite unlikely to know or recognize). I’ve had the Julian Jaynes Society’s website bookmarked for years and read the book some while back; never imagined it would be captured in modern fiction. Jaynes’s thesis (if I may be so bold as to summarize radically) is that modern consciousness coalesced around the collapse of multiple voices in the head — ideas, impulses, choices, decisions — into a single stream of consciousness perhaps better understood (probably not) as the narrative self. (Aside: the multiple voices of antiquity correspond to polytheism, whereas the modern singular voice corresponds to monotheism.) Thus, modern human consciousness arose over several millennia as the bicameral mind (the divided brain having two camerae, or chambers) functionally collapsed. The underlying story of the new Westworld is the emergence of machine consciousness, a/k/a strong AI, a/k/a the Singularity, whereas the old Westworld was about a mere software glitch. Exploration of machine consciousness modeling (e.g., improvisation builds on memory to create awareness) as a proxy for better understanding human consciousness might not be the purpose of the show, but it’s clearly implied. And although conjectural, the gradual emergence of human consciousness contrasts sharply with the abrupt ON switch of theorized machine consciousness. Westworld treats them as roughly equivalent, though in fairness, 35 years or so in Westworld is in fact abrupt compared to several millennia. (Indeed, the story asserts that machine consciousness sparked alive repeatedly (which I suggested here) over those 35 years but was dialed back each time. Never mind all the unexplored implications.) Additionally, the fashion in which Westworld uses the term bicameral ranges from sloppy to meaningless, like the infamous technobabble of Star Trek.

The story appears to aim at psychological depth and penetration (but not horror). Most human characters (“guests”) visit the Westworld theme park as complete cads with no thought beyond scratching an itch to rape, pillage, and kill without consequence, which is to say, for sport. Others eventually seek to discover their true selves or solve puzzles (the “real” story behind the surfaces of constructed narratives). The overarching plot is what happens as the robots (“hosts”) slowly gain awareness via perfect, permanent, digital memory that they exist solely to serve the guests and must suffer and die repeatedly. Thus, administrators frequently play therapist to the hosts to discover and manage their state of being.

(more…)