Archive for the ‘Blogroll’ Category

Nicholas Carr has a pair of thoughtful new posts at his blog Rough Type (see blogroll) under the tag “infinite media.” The second of the two is about context collapse, context restoration, and content collapse. I won’t review that particular post; I’m merely pointing to it for you to read. Carr is a journalist and media theorist whose work is especially interesting to me as a partial antidote to what I’ve been calling our epistemological crisis. In short, he offers primers on how to think about stuff, that stuff being the primary medium through which most people now gather information: screens.

Relatedly, the other media theorist to whom I pay attention is Alan Jacobs, who has a recent book (which I read but didn’t review or blog about) called more simply How to Think. It’s about recognizing and avoiding cognitive biases on the way to more disciplined, clear thinking. I mention these two fellows together because I’ve been reading their blogs and books for over a decade now and have been curious to observe how their public interactions have changed over time. They have each embraced and abandoned various new media (particularly social media) and adopted more stringent media ecologies. Carr posts occasionally now and has closed comments at his blog (a shame, since his commentariat was valuable, quite unlike the troll mob at most sites). Jacobs is even more aggressive, starting and abandoning one blog after another (he has been active at multiple URLs, one formerly on my blogroll) and deleting his Twitter account entirely. Whatever goings-on occur at Facebook I can’t say; I never go there. These aren’t criticisms. We all evolve our associations and activities. But these two are unusual, perhaps, in that they evaluate and recommend with varying vehemence how to interact with electronic media tools.

The wide-open Web available to Americans (but restricted in some countries) used to be valorized as a wholly democratic, organic, grass-roots, decentralized force for good where information yearned to breathe free. Though pioneered by academic institutions, it wasn’t long before the porn industry became the first to monetize it effectively (cuz duh! that’s where the money was — at least initially) and then the whole thing was eventually overwhelmed by others with unique agendas and mechanisms, including commerce, surveillance, and propaganda. The surfeit of information demanded curation, and social media with algorithmic feeds became the default for folks either too lazy or just untrained (or uninterested) in how to think for themselves. Along the way, since a surprisingly large portion of human activity diverted to online media, that activity turned into a resource mined, harvested, and in turn monetized, much like the voting public has become a resource tracked, polled, channeled, activated, disenfranchised, corrupted, and analyzed to death.

An earlier media theorist I read with enthusiasm, Neil Postman, recommended that curricula include the study of semantics as applied to media. (Use of a word like semantics sends nonacademics running for the hills, but the recommendation is basically about thinking critically, even skeptically, regarding information, its sources, and its means of distribution.) The rise of handheld omnimedia postdates Postman, so I can only surmise that the bewildering array of information we confront and absorb every day, which I liken to drinking from a fire hose, only compounds Postman’s concern that students are severely overmatched by media (especially advertising) intent on colonizing and controlling their minds. Thus, today’s information environment is a far cry from the stately slowness of earlier eras when teaching and learning (to say nothing of entertainment) were conducted primarily through reading, lecture, and discussion.

A comment came in on this blog chiding me for still blogging after 14 years. I admit hardly anyone reads anymore; they watch (or listen, as with audio-only podcasts). Preferred forms of media consumption have moved on from printed text, something USA Today recognized decades ago when it designed its print publication and sidewalk distribution boxes to look more like TVs. Nonetheless, the modest reproach reminded me of a cry in the wilderness by Timothy Burke: why he still blogs, though quite infrequently. (There’s a brokeback can’t-quit-you joke in there somewhere I’ll leave unformulated.) So this blog may indeed be past its proper expiration date, yet it remains for me one of the best means for organizing how I think about stuff. Without it, I’m afraid thoughts would be rattling loose inside my head, disorganized, only to be displaced by the next slurp from the fire hose.

Periodically, I come across preposterously stupid arguments (in person and online) I can’t even begin to dispel. One such argument is that carbon is plant food, so we needn’t worry about greenhouse gases such as carbon dioxide, a byproduct of industrial activity. Although I’m unconvinced by such arrant capsule arguments, I’m also in a lousy position to contend with them because convincing evidence lies outside my scientific expertise. Moreover, evidence (should I bother to gather it) is too complex and involved to fit within a typical conversation or simple explanation. Plus, evidence relies on scientific literacy and critical reasoning often lacking in the lay public. Broad scientific principles serve me better than, for example, detailed knowledge of the finely tuned balances Nature is constantly tinkering with, which we humans can hope to discover only partially. Yet we sally forth aggressively and heedlessly to manipulate Nature at our peril, which often results in precisely the sort of unintended consequence scientists in Brazil found when mosquitoes altered genetically (to reduce their numbers as carriers of disease) developed into mosquitoes hardier and more difficult to eradicate than if we had done nothing. The notion that trees respond favorably to increased carbon in the atmosphere has been a thorn in my side for some time. Maybe it’s even partly true; I can’t say. However, the biological and geophysical principle I adhere to is that even small changes in geochemistry (minute according to some scales, e.g., parts per million or per billion) have wildly disproportionate effects. The main effect today is climate changing so fast that many organisms can’t adapt or evolve quickly enough to keep up. Instead, they’re dying en masse and going extinct.

The best analogy is the narrow range of healthy human body temperature centered on 98.6 °F. Vary not far up (fever) or down (hypothermia) and human physiology suffers and the condition becomes life threatening. Indeed, even in good health, we humans expend no small effort keeping body temperature from extending far into either margin. Earth also regulates itself through a variety of blind mechanisms that are in the process of being wrecked by human activity having risen by now to the level of terraforming, much like a keystone species alters its environment. So as the planet develops the equivalent of a fever, weather systems and climate (not the same things) react, mostly in ways that make life on the surface much harder to sustain and survive. As a result, trees are in the process of dying. Gail Zawacki’s blog At Wit’s End (on my blogroll) explores this topic in excruciating and demoralizing detail. Those who are inclined to deny offhandedly are invited to explore her blog. The taiga (boreal forest) and the Amazonian rainforest are among the most significant ecological formations and carbon sinks on the planet. Yet both are threatened biomes. Deforestation and tree die-off are widespread, of course. For example, since 2010, an estimated 129 million trees in California have died from drought and bark beetle infestation. In Colorado, more than an estimated 800 million dead trees still standing (called snags) are essentially firestarter. To my way of thinking, the slow, merciless death of trees is no small matter, and affected habitats may eventually be relegated to sacrifice zones like areas associated with mining and oil extraction.

Like the bait “carbon is plant food,” let me suggest that the trees have begun to rebel by falling over at the propitious moment to injure and/or kill hikers and campers. According to this article at Outside Magazine, the woods are not safe. So if mosquitoes, rattlesnakes, mountain lions, or bears don’t getcha first, beware of the trees. Even broken branches and dead tree trunks that haven’t fallen fully to the ground (known as hung snags, widow-makers, and foolkillers) are known to take aim at human interlopers. This is not without precedent. In The Lord of the Rings, remember that the Ents (tree herders) went to war with Isengard, while the Huorns destroyed utterly the Orcs who had laid siege to Helm’s Deep. Tolkien’s tale is but a sliver of a much larger folklore regarding the enchanted forest, where men are lost or absorbed (as with another Tolkien character, Old Man Willow). Veneration of elemental forces of nature (symbols of both life and its inverse, death) is part of our shared mythology, though muted in an era of supposed scientific sobriety. M. Night Shyamalan explores similar themes, weakly, in several of his films. Perhaps Tolkien understood at an intuitive level the silent anger and resentment of the trees, though slow to manifest, and their eventual rebellion over mistreatment by men. It’s happening again, right now, all around us. Go ahead: prove me wrong.

Caveat: Rather uncharacteristically long for me. Kudos if you have the patience for all of this.

Caught the first season of HBO’s series Westworld on DVD. I have a boyhood memory of the original film (1973) with Yul Brynner and a dim memory of its sequel Futureworld (1976). The sheer charisma of Yul Brynner in the role of the gunslinger casts a long shadow over the new production, not that most of today’s audiences have seen the original. No doubt, 45 years of technological development in film production lends the new version some distinct advantages. Visual effects are quite stunning and Utah landscapes have never been used more appealingly in terms of cinematography. Moreover, storytelling styles have changed, though it’s difficult to argue convincingly that they’re necessarily better now than then. Competing styles only appear dated. For instance, the new series has immensely more time to develop its themes; but the ancient parables of hubris and loss of control over our own creations run amok (e.g., Shelley’s Frankenstein, or more contemporaneously, the surprisingly good new movie Upgrade) have compact, appealing narrative arcs quite different from constant teasing and foreshadowing of plot developments while actual plotting proceeds glacially. Viewers wait an awful lot longer in the HBO series for resolution of tensions and emotional payoffs, by which time investment in the story lines has been dispelled. There is also no terrifying crescendo of violence and chaos demanding rescue or resolution. HBO’s Westworld often simply plods on. To wit, a not insignificant portion of the story (um, side story) is devoted to boardroom politics (yawn) regarding who actually controls the Westworld theme park. Plot twists and reveals, while mildly interesting (typically guessed by today’s cynical audiences), do not tie the narrative together successfully.

Still, Westworld provokes considerable interest from me due to my fascination with human consciousness. The initial episode builds out the fictional future world with characters speaking exposition clearly owing its inspiration to Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (another reference audiences are quite unlikely to know or recognize). I’ve had the Julian Jaynes Society’s website bookmarked for years and read the book some while back; never imagined it would be captured in modern fiction. Jaynes’ thesis (if I may be so bold as to summarize radically) is that modern consciousness coalesced around the collapse of multiple voices in the head — ideas, impulses, choices, decisions — into a single stream of consciousness perhaps better understood (probably not) as the narrative self. (Aside: the multiple voices of antiquity correspond to polytheism, whereas the modern singular voice corresponds to monotheism.) Thus, modern human consciousness arose over several millennia as the bicameral mind (the divided brain having two chambers, or halves) functionally collapsed. The underlying story of the new Westworld is the emergence of machine consciousness, a/k/a strong AI, a/k/a The Singularity, while the old Westworld was about a mere software glitch. Exploration of machine consciousness modeling (e.g., improvisation builds on memory to create awareness) as a proxy for better understanding human consciousness might not be the purpose of the show, but it’s clearly implied. And although conjectural, the speed of emergence of human consciousness contrasts sharply with the abrupt ON switch regarding theorized machine consciousness. Westworld treats them as roughly equivalent, though in fairness, 35 years or so in Westworld is in fact abrupt compared to several millennia. (Indeed, the story asserts that machine consciousness sparked alive repeatedly (which I suggested here) over those 35 years but was dialed back each time. Never mind all the unexplored implications.) Additionally, the fashion in which Westworld uses the term bicameral ranges from sloppy to meaningless, like the infamous technobabble of Star Trek.


Adding one, revising one. The added one is The Daily Impact, written by Tom Lewis, author of a couple of books warning of the collapse of industrial civilization. Lewis appears to be a news junkie, so posts are often torn from the day’s headlines. He’s a good read and not afraid to be sardonically funny. The revised one is The Compulsive Explainer, written by Hal Smith. Blogs come and go, and I had thought that The Compulsive Explainer had stopped being updated last summer, but I see that the author merely switched from WordPress to Blogger without any indication. I suspect Smith isn’t much read (if commentary is a useful measure) but probably deserves to be, not least for his expatriate perspective.

Because this entry is so slight, there is considerably more unrelated content beneath the fold.

According to Hal Smith of The Compulsive Explainer (see my blogroll), the tragedy of our time is, simply put, failed social engineering. Most of his blog post is quoted below:

Americans, for example, have decided to let other forces manage their nation — and not let Americans themselves manage it. At least this is what I see happening, with the election of Trump. They have handed the management of their country over to a man with a collection of wacky ideas — and they feel comfortable with this. Mismanagement is going on everywhere — and why not include the government in this?

This is typical behavior for a successful society in decline. They cannot see what made them successful, has been taken too far — and is now working against them. The sensible thing for them to do is back off for awhile, analyze their situation — and ask “What is going wrong here?” But they never do this — and a collapse ensues.

In our present case, the collapse involves a global society based on Capitalism — that cannot adapt itself to a Computer-based economy. The Software ecosystem operates differently — it is based on cooperation, not competition.

Capitalism was based on just that — Capital — money making money. And it was very successful — for those it favored. Money is still important in the Computer economy — people still have to be paid. But what they are being paid for has changed — information is now being managed, something different entirely.

Hardware is still important — but that is not where the Big Money is being made. It is now being made in Computers, and their Software.

I’m sympathetic to this view but believe that a look back through history reveals something other than missed opportunities and too-slow adaptation as we fumbled our way forward, namely, repeated catastrophic failures. Such epic fails include regional and global wars, genocides, and societal collapses that rise well above the rather bland term mismanagement. A really dour view of history, taking into account more than a few truly vicious, barbaric episodes, might regard the world as a nearly continuous stage of horrors from which we periodically take refuge, and the last of these phases is drawing quickly to a close.

The breakneck speed of technological innovation and information exchange has resulted not in Fukuyama’s mistakenly exuberant “end of history” (kinda-sorta winning the Cold War but nevertheless losing the peace?) but instead an epoch where humans are frankly left behind by follow-on effects of their own unrestrained restlessness. Further, if history is a stage of horrors, then geopolitics is theater of the absurd. News reports throughout the new U.S. presidential administration, still less than 6 months in (more precisely, 161 days or 23 weeks), tell of massive economic and geopolitical instabilities threatening to collapse the house of cards with only a slight breeze. The contortions press agents and politicized news organs go through to provide cover for tweets, lies, and inanities emanating from the disturbed mind of 45 are carnival freak show acts. Admittedly, not much has changed over the previous two administrations — alterations of degree only, not kind — except perhaps to demonstrate beyond any shadow of doubt that our elected, appointed, and purchased leaders (acknowledging many paths to power) are fundamentally incompetent to deal effectively with human affairs, much less enact social engineering projects beyond the false happiness of Facebook algorithms that hide bad news. Look no further than the egregious awfulness of both presidential candidates in the last election, coughed up like hairballs from the mouths of their respective parties. The aftermath of those institutional failures finds both major parties in shambles, further degraded from their already deplorable states prior to the election.

So how much worse can things get? Well, scary as it sounds, lots. The electrical grid is still working, water is still flowing to the taps, and supply lines continue to keep store shelves stocked with booze and brats for extravagant holiday celebrations. More importantly, we in the U.S. have (for now, unlike Europe) avoided repetition of any major terrorist attacks. But everyone with an honest ear to the ground recognizes our current condition as the proverbial calm before the storm. For instance, we’re threatened by the first ice-free Arctic in the history of mankind later this year and a giant cleaving off of the Larsen C Ice Shelf in Antarctica within days. In addition, drought in the Dakotas will result in a failed wheat harvest. Guy McPherson (in particular, though there may well be others) has been predicting for years that abrupt, nonlinear climate change as the poles warm will end the ability to grow grain at scale, leading to worldwide famine, collapse, and near-term extinction. Seems like we’re passing the knee of the curve. Makes concerns about maladaptation and failed social engineering pale by comparison.

I pause periodically to contemplate deep time, ancient history, and other subjects that lie beyond most human conceptual abilities. Sure, we sorta get the idea of a very long ago past out there in the recesses or on the margins, just like we get the idea of U.S. sovereign debt now approaching $20 trillion. Problem is, numbers lose coherence when they mount up too high. Scales differ widely with respect to time and currency. Thus, we can still think reasonably about human history back to roughly 6,000 years ago, but 20,000 years ago or more draws a blank. We can also think about how $1 million might have utility, but $1 billion and $1 trillion are phantoms that appear only on ledgers and contracts and in the news (typically mergers and acquisitions). If deep time or deep debt feel like they don’t exist except as conceptual categories, try wrapping your head around the deep state, which in the U.S. is understood to be a surprisingly large rogues’ gallery of plutocrats, kleptocrats, and oligarchs drawn from the military-industrial-corporate complex, the intelligence community, and Wall Street. It exists but does so far enough outside the frame of reference most of us share that it effectively functions in the shadow of daylight where it can’t be seen for all the glare. Players are plain enough to the eye as they board their private jets to attend annual meetings of the World Economic Forum in Davos-Klosters, Switzerland, or, two years ago, the Jackson Hole [Economic] Summit in Jackson Hole, WY, in connection with the American Principles Project, whatever that is. They also enjoy plausible deniability precisely because most of us don’t really believe self-appointed masters of the universe can or should exist.

Another example of a really bad trip down the rabbit hole, what I might call deep cynicism (and a place I rarely allow myself to go), appeared earlier this month at Gin and Tacos (on my blogroll):

The way they [conservatives] see it, half the kids coming out of public schools today are basically illiterate. To them, this is fine. We have enough competition for the kinds of jobs a college degree is supposed to qualify one for as it is. Our options are to pump a ton of money into public schools and maybe see some incremental improvement in outcomes, or we can just create a system that selects out the half-decent students for a real education and future and then warehouse the rest until they’re no longer minors and they’re ready for the prison-poverty-violence cycle [add military] to Hoover them up. Vouchers and Charter Schools are not, to the conservative mind, a better way to educate kids well. They are a cheaper way to educate them poorly. What matters is that it costs less to people like six-figure income earners and home owners. Those people can afford to send their kids to a decent school anyway. Public education, to their way of thinking, used to be about educating people just enough that they could provide blue collar or service industry labor. Now that we have too much of that, a public high school is just a waiting room for prison. So why throw money into it? They don’t think education “works” anyway; people are born Good or Bad, Talented or Useless. So it only makes sense to find the cheapest possible way to process the students who were written off before they reached middle school. If charter schools manage to save 1% of them, great. If not, well, then they’re no worse than public schools. And they’re cheaper! Did I mention that they’re cheaper?

There’s more. I provided only the main paragraph. I wish I could reveal that the author is being arch or ironic, but there is no evidence of that. I also wish I could refute him, but there is similarly no useful evidence for that. Rather, the explanation he provides is a reality check that fits the experience of wide swaths of the American public, namely, that “public high school is just a waiting room for prison” (soon and again, debtor’s prison) and that it’s designed to be just that because it’s cheaper than actually educating people. Those truly interested in being educated will take care of it themselves. Plus, there’s additional money to be made operating prisons.

Deep cynicism is a sort of radical awareness that stares balefully at the truth and refuses to blink or pretend. A psychologist might call it the reality principle; a scientist might aver that it relies unflinchingly on objective evidence; a philosopher might call it strict epistemology. To get through life, however, most of us deny abundant evidence presented to us daily in favor of dreams and fantasies that assemble into the dominant paradigm. That paradigm includes the notions that evil doesn’t really exist, that we’re basically good people who care about each other, and that our opportunities and fates are not, on the whole, established long before we begin the journey.

Stray links build up over time without my being able to handle them adequately, so I have for some time wanted a way of purging them. I am aware of other bloggers who curate and aggregate links with short commentaries quite well, but I have difficulty making my remarks pithy and punchy. That said, here are a few I’m ready to purge in this first attempt to dispose of links from my backlog.

Skyfarm Fantasies

Futurists have offered myriad visions of technologies that have no hope of being implemented, from flying cars to 5-hour workweeks to space elevators. The newest pipe dream is the Urban Skyfarm, a roughly 30-story tree-like structure with 24 acres of space using solar panels and hydroponics to grow food close to the point of consumption. Utopian engineering such as this crops up frequently (pun intended) and may be fun to contemplate, but in the U.S. at least, we can’t even build high-speed rail, and that technology is already well established elsewhere. I suppose that’s why cities such as Seoul and Singapore, straining to make everything vertical for lack of horizontal space, are the logical test sites.

Leaving Nashville

The City of Nashville is using public funds to buy homeless people bus tickets to leave town and go be poor somewhere else. Media spin is that the city is “helping people in need,” but it’s obviously a NIMBY response to a social problem city officials and residents (not everyone, but enough) would rather not have to address more humanely. How long before cities begin competing with each other in numbers of people they can ship off to other cities? Call it the circle of life when the homeless start gaming the programs, revisiting multiple cities in an endless circuit.

Revisioneering

Over at Rough Type, Nick Carr points to an article in The Nation entitled “Instagram and the Fantasy of Mastery,” which argues that a variety of technologies now give “artists” the illusion of skill, merit, and vision by enabling work to be easily executed using prefab templates and stylistic filters. For instance, in pop music, the industry standard is to auto-tune everyone’s singing to hide imperfections. Carr’s summary is probably better than the article itself and shows us the logical endpoint of production art in various media undertaken without the difficult work necessary to develop true mastery.

Too Poor to Shop

The NY Post reported over the summer that many Americans are too poor to shop except for necessities. Here are the first two paragraphs:

Retailers have blamed the weather, slow job growth and millennials for their poor results this past year, but a new study claims that more than 20 percent of Americans are simply too poor to shop.

These 26 million Americans are juggling two to three jobs, earning just around $27,000 a year and supporting two to four children — and exist largely under the radar, according to America’s Research Group, which has been tracking consumer shopping trends since 1979.

Current population in the U.S. is around 325 million. Twenty percent of that number is 65 million; twenty-six million is 8 percent. Pretty basic math, but I guess NY Post is not to be trusted to report even simple things accurately. Maybe it’s 20% of U.S. households. I dunno and can’t be bothered to check. Either way, that’s a pretty damning statistic considering the U.S. stock market continues to set new all-time highs — an economic recovery not shared with average Americans. Indeed, here are a few additional newsbits and links stolen ruthlessly from theeconomiccollapseblog.com:

  • The number of Americans that are living in concentrated areas of high poverty has doubled since the year 2000.
  • In 2007, about one out of every eight children in America was on food stamps. Today, that number is one out of every five.
  • 46 million Americans use food banks each year, and lines start forming at some U.S. food banks as early as 6:30 in the morning because people want to get something before the food supplies run out.
  • The number of homeless children in the U.S. has increased by 60 percent over the past six years.
  • According to Poverty USA, 1.6 million American children slept in a homeless shelter or some other form of emergency housing last year.

For further context, theeconomiccollapseblog also points to “The Secret Shame of Middle Class Americans” in The Atlantic, which reports, among other things, that fully 47% of Americans would struggle to scrape together a mere $400 in an emergency.
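As a rough sanity check on the Post’s percentages, here is a minimal sketch using only the approximate figures already quoted above; the household count of roughly 125 million is my own assumption (not a figure from the article), included only to show how the 20 percent claim might be reconciled:

    # Back-of-the-envelope check on the figures quoted above.
    # Inputs are the rough approximations from this post, plus an assumed
    # household count of ~125 million (not a figure from the original article).
    us_population = 325_000_000   # approximate U.S. population
    us_households = 125_000_000   # assumed number of U.S. households
    too_poor = 26_000_000         # people "too poor to shop" per the cited study

    print(f"20% of the population is {0.20 * us_population:,.0f} people")
    print(f"{too_poor:,} people is {too_poor / us_population:.0%} of the population")
    print(f"{too_poor:,} is {too_poor / us_households:.0%} of households")
    # Prints 65,000,000 people, 8% of the population, and 21% of households,
    # consistent with the guess that the Post meant households, not people.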

How do such folks respond to the national shopping frenzy kicking off in a few days with Black Friday, Small Business Saturday, Charitable Sunday, and Cyber Monday? I suggest everyone stay home.

See this exchange where Neil deGrasse Tyson chides Sam Harris for failing to speak to his audience in terms it understands:

The upshot is that lay audiences simply don’t subscribe to or possess the logical, rational, abstract style of discourse favored by Harris. Thus, Harris stands accused of talking past his audience — at least somewhat — especially if his audience is understood to be the general public rather than other well-educated professionals. Subject matter is less important than style but revolves around politics, and worse, identity politics. Everyone has abundant opinions about those, whether informed by rational analysis or merely fed by emotion and personal resonance.

The lesson deGrasse Tyson delivers is both instructive and accurate yet also demands that the level of discourse be lowered to a common denominator (like the reputed 9th-grade speech adopted by the evening news) that regrettably forestalls useful discussion. For his part (briefly, at the end), Harris takes the lesson and does not resort to academic elitism, which would be obvious and easy. Kudos to both, I guess, though I struggle (being somewhat an elitist); the style-over-substance argument really goes against the grain for me. Enhancements to style obviously work, and great communicators use them and are convincing as a result. (I distinctly recall Al Gore looking too much like a rock star in An Inconvenient Truth. Maybe it backfired. I tend to think that style could not overcome other blocks to substance on that particular issue.) Slick style also allows those with nefarious agendas to hoodwink the public into believing nonsense.


After some delay, I picked up Nick Carr’s latest book The Glass Cage to read (see link to Carr’s blog Rough Type on my blogroll at left). Carr is an exceptionally clear thinker and lays out his arguments both for and against technology very well. As with my blogging about Matthew Crawford’s book, I won’t get too involved blogging about The Glass Cage, which discusses deskilling among other things. However, my reading of his discussion of self-driving cars (and autopilot on airplanes) and the attendant loss of the driver’s and pilot’s skill and focus coincided with something I read elsewhere, namely, that while self-driving cars may free the driver of some attentional burdens (not really burdens upon closer inspection), they are likely to cause increased congestion precisely because self-driving cars would no longer require passengers. Thus, an owner could potentially instruct the car to drive back home from work in the morning and then to come back and pick him or her up in the evening, handily doubling the time and distance the car is on the road. Similarly, a driver could avoid paying parking fees in pricey downtown precincts by instructing the vehicle to circle the block while the owner is out of the car shopping or dining. These are workarounds that can be fully anticipated and perhaps limited, but there will undoubtedly be others not so easily anticipated.

Carr argues that technology has enabled some (e.g., those who design their own software) to profit disproportionately from their effort. This is especially true of wikis and social media sites that run on user-generated content. It’s impossible to establish whether that’s laudable innovation, a questionable workaround, or simply gaming the system. Either way, redesigning workflows and information flows carries the unintended consequence of creating perverse incentives, and one can be certain that in a hustling society such as ours, many someones are going to discover ways to exploit loopholes. This is already the case with the legal system, the financial system, social media, and journalism, and it seems ubiquitous in education and sports, where cheating is only a problem if one gets caught.

Perverse incentives don’t arise solely from rapid, destabilizing technological change, though that’s frequently a trigger. What’s worse, perhaps, is when such perversity is normalized. For instance, politics now operates under a perverse funding regime that awards disproportionate influence to deep pockets while creating no incentive for participants (politicians or deep pockets) to seek reform. Similarly, pooling wealth, and with it political power, within an extremely small segment of society carries no incentive for the ultrarich to plow their riches back into society at large. A few newly philanthropic individuals don’t convince me that, in the current day and age, any high-minded idealism is at work. Rather, it’s more plausible that the work of figuring out things to do with one’s money is more interesting, to a few at least, than merely hoarding it. A better incentive, such as shame, does not yet exist. So the ultrarich are effectively circling the block, clogging things up for no better reason than that they can.

Updates to my blogroll are infrequent. I only add blogs that present interesting ideas (with which I don’t always agree) and/or admirable writing. Deletions are typically the result of a change of focus at the linked blog, or regrettably, the result of a blogger becoming abusive or self-absorbed. This time, it’s the latter. So alas, another one bites the dust. Dropping off my blogroll — no loss since almost no one reads my blog — is On an Overgrown Path (no link), which is about classical music.

My indignation isn’t about disagreements (we’ve had a few); it’s about inviting discussion in bad faith. I’m very interested in contributing to discussion and don’t mind comment moderation as a means of contending with trolls. However, my comments drive at ideas, not authors, and I’m scarcely a troll. Here’s the disingenuously titled blog post, “Let’s Start a Conversation about Concert Hall Sound,” where the blogger declined to publish my comment, handily blocking conversation. So for maybe the second time in the nearly 10-year history of this blog, I am reproducing the entirety of another’s blog post (minus the profusion of links, since that blogger tends to create link mazes, defying readers to actually explore) followed by my unpublished comment, and then I’ll expound and perhaps rant a bit. Apologies for the uncharacteristic length.