Archive for the ‘Science’ Category

As I read into Fingerprints of the Gods by Graham Hancock and learn more about antiquity, it becomes clear that weather conditions on Earth were far more hostile then (say, 15,000 years ago) than now. Looking way, way back, millions of years into the past, scientists have plotted global average temperature and atmospheric carbon, mostly using ice cores as I understand it, yielding this graph:

[Figure: CO2 levels and global average temperature over geological time]

I’ve seen this graph before, which is often used by climate change deniers to show a lack of correlation between carbon and temperature. That’s not what concerns me. Instead, the amazing thing is how temperature careens up and down quickly (in geological time) between two limits, 12°C and 22°C, and forms steady states known as Ice Age Earth and Hot House Earth. According to the graph, we’re close to the lower limit. It’s worth noting that because of the extremely long timescale, the graph is considerably smoothed.
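Out of curiosity, the effect of that smoothing is easy to demonstrate with a toy moving average (my own illustration, nothing to do with the actual paleoclimate reconstruction; the data and window are invented):

```python
# Toy demonstration that heavy smoothing hides rapid excursions.
import numpy as np

rng = np.random.default_rng(0)
temps = 15 + rng.normal(0, 0.5, 1000)  # noisy stand-in "temperature" record
temps[500:510] += 5.0                  # a brief, violent 5-degree spike

window = 100                           # very long averaging window
smoothed = np.convolve(temps, np.ones(window) / window, mode="valid")

print(round(temps.max() - 15, 1))      # raw record: spike stands out (~5-6)
print(round(smoothed.max() - 15, 1))   # smoothed: spike shrinks to ~0.5
```

Which is to say, the graph’s placid curves likely conceal short-term swings far more violent than anything visible at this scale.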


This past Thursday was an occasion of protest for many immigrant laborers who did not show up to work. Presumably, this action was in response to recent executive attacks on immigrants and hoped to demonstrate how businesses would suffer without immigrant labor doing jobs Americans frequently do not want. Tensions between the ownership and laboring classes have a long, tawdry history I cannot begin to summarize. As with other contextual failures, I daresay the general public believes incorrectly that such conflicts date from the 19th century when formal sociopolitical theories like Marxism were published, which intersect heavily with labor economics. An only slightly better understanding is that the labor movement commenced in the United Kingdom some fifty years after the Industrial Revolution began, such as with the Luddites. I pause to remind that the most basic, enduring, and abhorrent labor relationship, extending back millennia, is slavery, which ended in the U.S. only 152 years ago but continues even today in slightly revised forms around the globe.

Thursday’s work stoppage was a faint echo of general strikes and unionism from the middle of the 20th century. Gains in wages and benefits, working conditions, and negotiating position transferred some power from owners to laborers during that period, but today, laborers must sense they are back on their heels, defending conditions fought for by their grandparents but ultimately losing considerable ground. Of course, I’m sympathetic to labor, considering I’m not in the ownership class. (It’s all about perspective.) I must also admit, however, to once quitting, after only one day, a job that was simply too, well, laborious. I had that option at the time, though it ultimately led me nearly to bankruptcy — a life lesson that continues to inform my attitudes. As I survey the scene today, I suspect many laborers — immigrants and native-born Americans alike — have the unenviable choice of accepting difficult, strenuous labor for low pay or being unemployed. Gradual reduction of demand for labor has two main causes: globalization and automation.


Stray links build up over time without my being able to handle them adequately, so I have for some time wanted a way of purging them. I am aware of other bloggers who curate and aggregate links with short commentaries quite well, but I have difficulty making my remarks pithy and punchy. That said, here are a few I’m ready to purge in this first attempt to clear some of my backlog.

Skyfarm Fantasies

Futurists have offered myriad visions of technologies that have no hope of being implemented, from flying cars to 5-hour workweeks to space elevators. The newest pipe dream is the Urban Skyfarm, a roughly 30-story tree-like structure with 24 acres of space using solar panels and hydroponics to grow food close to the point of consumption. Utopian engineering such as this crops up frequently (pun intended) and may be fun to contemplate, but in the U.S. at least, we can’t even build high-speed rail, and that technology is already well established elsewhere. I suppose that’s why cities such as Seoul and Singapore, straining to make everything vertical for lack of horizontal space, are the logical test sites.

Leaving Nashville

The City of Nashville is using public funds to buy homeless people bus tickets to leave town and go be poor somewhere else. Media spin is that the city is “helping people in need,” but it’s obviously a NIMBY response to a social problem city officials and residents (not everyone, but enough) would rather not have to address more humanely. How long before cities begin competing with each other in the numbers of people they can ship off to other cities? Call it the circle of life when the homeless start gaming the programs, revisiting multiple cities in an endless circuit.

Revisioneering

Over at Rough Type, Nick Carr points to an article in The Nation entitled “Instagram and the Fantasy of Mastery,” which argues that a variety of technologies now give “artists” the illusion of skill, merit, and vision by enabling work to be easily executed using prefab templates and stylistic filters. For instance, in pop music, the industry standard is to auto-tune everyone’s singing to hide imperfections. Carr’s summary probably is better than the article itself and shows us the logical endpoint of production art in various media undertaken without the difficult work necessary to develop true mastery.

Too Poor to Shop

The NY Post reported over the summer that many Americans are too poor to shop except for necessities. Here are the first two paragraphs:

Retailers have blamed the weather, slow job growth and millennials for their poor results this past year, but a new study claims that more than 20 percent of Americans are simply too poor to shop.

These 26 million Americans are juggling two to three jobs, earning just around $27,000 a year and supporting two to four children — and exist largely under the radar, according to America’s Research Group, which has been tracking consumer shopping trends since 1979.

Current population in the U.S. is around 325 million. Twenty percent of that number is 65 million; twenty-six million is 8 percent. Pretty basic math, but I guess the NY Post is not to be trusted to report even simple things accurately. Maybe it’s 20% of U.S. households. I dunno and can’t be bothered to check, though the quick arithmetic below suggests that’s the likelier reading. Either way, that’s a pretty damning statistic considering the U.S. stock market continues to set new all-time highs — an economic recovery not shared with average Americans. Indeed, here are a few additional newsbits and links stolen ruthlessly from theeconomiccollapseblog.com:

  • The number of Americans that are living in concentrated areas of high poverty has doubled since the year 2000.
  • In 2007, about one out of every eight children in America was on food stamps. Today, that number is one out of every five.
  • 46 million Americans use food banks each year, and lines start forming at some U.S. food banks as early as 6:30 in the morning because people want to get something before the food supplies run out.
  • The number of homeless children in the U.S. has increased by 60 percent over the past six years.
  • According to Poverty USA, 1.6 million American children slept in a homeless shelter or some other form of emergency housing last year.

For further context, theeconomiccollapseblog also points to “The Secret Shame of Middle Class Americans” in The Atlantic, which reports, among other things, that fully 47% of Americans would struggle to scrape together a mere $400 in an emergency.
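Since I groused about the math above, here is the quick sanity check I promised (a minimal sketch; the household count is my assumption, roughly the Census figure):

```python
# Rough check of the NY Post's "more than 20 percent of Americans" claim.
population = 325_000_000    # approximate U.S. population
households = 126_000_000    # approximate U.S. households (my assumption)
poor_shoppers = 26_000_000  # figure cited by America's Research Group

print(f"{poor_shoppers / population:.0%} of Americans")   # 8% of Americans
print(f"{poor_shoppers / households:.0%} of households")  # 21% of households
```

So the study’s “more than 20 percent” almost certainly referred to households, not people.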

How do such folks respond to the national shopping frenzy kicking off in a few days with Black Friday, Small Business Saturday, Charitable Sunday, and Cyber Monday? I suggest everyone stay home.

Caveat: Apologies for this overlong post, which random visitors (nearly the only kind I have besides the spambots) may find rather challenging.

The puzzle of consciousness, mind, identity, self, psyche, soul, etc. is an extraordinarily fascinating subject. We use various terms, but they all revolve around a unitary property and yet come from different approaches, methodologies, and philosophies. The term mind is probably the most generic; I tend to use consciousness interchangeably and more often. Scientific American has an entire section of its website devoted to the mind, with subsections on Behavior & Society, Cognition, Mental Health, Neurological Health, and Neuroscience. (Top-level navigation offers links to these sections: The Sciences, Mind, Health, Tech, Sustainability, Education, Video, Podcasts, Blogs, and Store.) I doubt I will explore very deeply because science favors the materialist approach, which I believe misses the forest for the trees. However, the presence of this area of inquiry right at the top of the page indicates how much attention and research the mind/consciousness is currently receiving.

A guest blog at Scientific American by Adam Bear entitled “What Neuroscience Says about Free Will” makes the fashionable argument (these days) that free will doesn’t exist. The blog/article is disclaimed: “The views expressed are those of the author(s) and are not necessarily those of Scientific American.” I find that a little weaselly. Because the subject is still wide open to interpretation and debate, Scientific American should simply offer conflicting points of view without worry. Bear’s arguments rest on the mind’s ability to revise and redate experience occurring within the frame of a few milliseconds to allow for processing time, also known as the postdictive illusion (the opposite of predictive). I wrote about this topic more than four years ago here. Yet another discussion is found here. I admit to being irritated that the questions and conclusions stem from a series of assumptions, primarily that whatever free will is must occur solely in consciousness (whatever that is) as opposed to originating in the subconscious and subsequently transferring into consciousness. Admittedly, we use these two categories — consciousness and the subconscious — to account for the rather limited amount of processing that makes it all the way into awareness vs. the significant amount that remains hidden or submerged. A secondary assumption, the broader project of neuroscience in fact, is that, like free will, consciousness is housed somewhere in the brain or its categorical functions. Thus, fruitful inquiry results from seeking its root, seed, or seat as though the narrative constructed by the mind, the stream of consciousness, were on display to an inner observer or imp in what Daniel Dennett years ago called the Cartesian Theater. That time-worn conceit is the so-called ghost in the machine.

The last time I blogged about this topic, I took an historical approach, locating the problem (roughly) in time and place. In response to recent blog entries by Dave Pollard at How to Save the World, I’ve delved into the topic again. My comments at his site are the length of most of my own blog entries (3–4 paras.), whereas Dave tends to write in chapter form. I’ve condensed to my self-imposed limit.

Like culture and history, consciousness is a moving train that yields its secrets long after it has passed. Thus, assessing our current position is largely conjectural. Still, I’ll be reckless enough to offer my intuitions for consideration. Dave has been pursuing radical nonduality, a mode of thought characterized by losing one’s sense of self and becoming selfless, which diverges markedly from ego consciousness. That mental posture, described elsewhere by nameless others as participating consciousness, is believed to be what preceded the modern mind. I commented that losing oneself in intense, consuming flow behaviors is commonplace but temporary, a familiar, even transcendent place we can only visit. Its appeals are extremely seductive, however, and many people want to be there full-time, as we once were. The problem is that ego consciousness is remarkably resilient and self-reinforcing. Though we lose ourselves from time to time, we can’t be liberated from the self permanently, and pathways to even temporarily getting out of one’s own head are elusive and sometimes self-destructive.

My intuition is that we are fumbling toward just such a quieting of the mind, a new dark age if you will, or what I called self-lite in my discussion with Dave. As we stagger forth, groping blindly in the dark, the transitional phase is characterized by numerous disturbances to the psyche — a crisis of consciousness wholly different from the historical one described previously. The example uppermost in my thinking is people lost down the rabbit hole of their handheld devices and desensitized to the world beyond the screen. Another is the ruined, wasted minds of (arguably) two or more generations of students done great disservice by their parents and educational institutions at all levels, a critical mass of intellectually stunted and distracted young adults by now. Yet another is those radicalized by their close identification with one or more special interest groups, also known as identity politics. A further example is the growing prevalence of confusion surrounding sexual orientation and gender identity. In each example, the individual’s ego is confused, partially suppressed, and/or under attack. Science fiction and horror genres have plenty of instructive examples of people who are no longer fully themselves, their bodies zombified or made into hosts for another entity that takes up residence, commandeering or shunting aside the authentic, original self.

Despite having identified modern ego consciousness as a crisis and feeling no small amount of empathy for those seeking radical nonduality, I find myself in the odd position of defending the modern mind precisely because transitional forms, if I have understood them properly, are so abhorrent. Put another way, while I can see the potential value and allure of extinguishing the self even semi-permanently, I will not be an early adopter. Indeed, if the modern mind took millennia to develop as one of the primary evolutionary characteristics of Homo sapiens sapiens, it seems foolish to presume that it can be uploaded into a computer, purposely discarded by an act of will, or devolved in even a few generations. Meanwhile, though the doomer in me recognizes that ego consciousness is partly responsible for bringing us to the brink of (self-)annihilation (financial, geopolitical, ecological), individuality and intelligence are still highly prized where they can be found.

Every blog post I write suffers from the same basic problem: drawing disparate ideas together in succinct, cogent form that expresses enough of the thesis to make sense while leaving room for commentary, discussion, and development. Alas, commentary and discussion are nearly nonexistent, but that’s always been my expectation and experience given my subjects. When expanding a blog post into several parts, the greatest risk is that ideas fail to coalesce legibly, compounded by the unlikelihood that readers who happen to navigate here will bother to read all the parts. (I suspect this is due in part to most readers’ inability to comprehend complex, multipart writing, as discussed in this blog post by Ugo Bardi describing surprising levels of functional illiteracy.) So this addendum to my three-part blog on Dissolving Reality is doomed, like the rest of my blog, to go unread and ignored. Plus ça change

Have you had the experience of buying a new model of vehicle and then suddenly noticing other vehicles of the same model on the road? That’s what I’ve been noticing since I hatched my thesis (noting with habitual resignation that there is nothing new under the sun), which is that the debased information environment now admits multiple interpretations of reality, none of which can lay exclusive claim to authority as an accurate account. Reality has instead dissolved into a stew of competing arguments, often extremely politicized, which typically appeal to emotion. Historically, the principal conflict was between different ways of knowing exemplified by faith and reason, perhaps better understood as the church (in the West, the Catholic Church) vs. science. Floodgates have now opened to any wild interpretation one might concoct, all of which coexist on roughly equal footing in the marketplace of ideas.

At last, getting to my much, much delayed final book blogs (three parts) on Iain McGilchrist’s The Master and His Emissary. The book came out in 2010, I picked it up in 2012 (as memory serves), and it took me nearly two years to read in its entirety, during which time I blogged my observations. I knew at the time of my previous post on the book that there would be more to say, and it’s taken considerable time to get back to it.

McGilchrist ends with a withering criticism of the Modern and Postmodern (PoMo) Eras, which I characterized as an account of how the world went mad. That still seems accurate to me: the madness that overtook us in the Modern Era led to world wars, genocides, and systematic reduction of humanity to mere material and mechanism, what Ortega y Gasset called Mass Man. Reduction of the rest of the living world to resources to be harvested and exploited by us is a worldview often called instrumental reality. From my armchair, I sense that our societal madness has shape-shifted a few times since the fin de siècle 1880s and 90s. Let’s start with quotes from McGilchrist before I extend into my own analysis. Here is one of his many descriptions of the left-hemisphere paradigm under which we now operate:

In his book on the subject, Modernity and Self-identity, Anthony Giddens describes the characteristic disruption of space and time required by globalisation, itself the necessary consequence of industrial capitalism, which destroys the sense of belonging, and ultimately of individual identity. He refers to what he calls ‘disembedding mechanisms’, the effect of which is to separate things from their context, and ourselves from the uniqueness of place, what he calls ‘locale’. Real things and experiences are replaced by symbolic tokens; ‘expert’ systems replace local know-how and skill with a centralised process dependent on rules. He sees a dangerous form of positive feedback, whereby theoretical positions, once promulgated, dictate the reality that comes about, since they are then fed back to us through the media, which form, as much as reflect, reality. The media also promote fragmentation by a random juxtaposition of items of information, as well as permitting the ‘intrusion of distant events into everyday consciousness’, another aspect of decontextualisation in modern life adding to loss of meaning in the experienced world. [p. 390]

Reliance on abstract, decontextualized tokens having only figurative, nonintrinsic power and meaning is a specific sort of distancing, isolation, and reduction that describes much of modern life and shares many characteristics with schizophrenia, as McGilchrist points out throughout the chapter. That was the first shape-shift of our madness: full-blown mechanization born of reductionism and materialism, perspectives bequeathed to us by science. The slow process had been underway since the invention of the mechanical clock and the discovery of heliocentrism, but it gained steam (pun intended) as the Industrial Revolution matured in the late 19th century.

The PoMo Era is recognized as having begun just after the middle of the 20th century, though its attributes are only loosely defined and understood. That said, the most damning criticism leveled at PoMo is its hall-of-mirrors effect, which renders objects in the mirrors meaningless because the original reference point is obscured or lost. McGilchrist also refers repeatedly to loss of meaning resulting from the ironizing effect of left-brain dominance. The corresponding academic fad was PoMo literary criticism (deconstruction) in the 1970s, but it had antecedents in quantum theory. Here is McGilchrist on PoMo:

With post-modernism, meaning drains away. Art becomes a game in which the emptiness of a wholly insubstantial world, in which there is nothing beyond the set of terms we have in vain used to ‘construct’ meaning, is allowed to speak for its own vacuity. The set of terms are now seen simply to refer to themselves. They have lost transparency; and all conditions that would yield meaning have been ironized out of existence. [pp. 422–423]

This was the second shape-shift: loss of meaning in the middle of the 20th century as purely theoretical formulations, which is to say, abstraction, gained adherents. He goes on:

Over-awareness … alienates us from the world and leads to a belief that only we, or our thought processes, are real … The detached, unmoving, unmoved observer feels that the world loses reality, becomes merely ‘things seen’. Attention is focussed on the field of consciousness itself, not on the world beyond, and we seem to experience experience … [In hyperconsciousness, elements] of the self and of experience which normally remain, and need to remain, intuitive, unconscious, become the objects of a detached, alienating attention, the levels of consciousness multiply, so that there is an awareness of one’s own awareness, and so on. The result of this is a sort of paralysis, in which even everyday ‘automatic’ actions such as moving one leg in front of another in order to walk can become problematic … The effect of hyperconsciousness is to produce a flight from the body and from its attendant emotions. [pp. 394–396]

Having devoted a fair amount of my intellectual life to trying to understand consciousness, I immediately recognized the discussion of hyperconsciousness (derived from Louis Sass) as what I often call recursion error, where consciousness becomes the object of its own contemplation, with obvious consequences. Modern, first-world people all suffer from this effect to varying degrees because that is how modern consciousness is shaped (warped, really).

I believe we can now observe two more characteristic extensions or variations of our madness, probably overlapping rather than discrete, following closely on each other: the Ironic and Post-Ironic. The characteristics are these:

  • Modern — reductive, mechanistic, instrumental interpretation of reality
  • Postmodern — self-referential (recursive) and meaningless reality
  • Ironic — reversed reality
  • Post-Ironic — multiplicity of competing meanings/narratives, multiple realities

All this is quite enough to chew on for a start. I plan to continue in pts. 2 and 3 with descriptions of the Ironic and Post-Ironic.

“Any sufficiently advanced technology is indistinguishable from magic.” –Arthur C. Clarke

/rant on

Jon Evans at TechCrunch has an idiotic opinion article titled “Technology Is Magic, Just Ask The Washington Post” that has gotten under my skin. His risible assertion that the WaPo editorial board uses magical thinking misframes the issue of whether police and other security agencies ought to have backdoor or golden-key access to end-users’ communications carried over electronic networks. He marshals a few experts in the field of encryption and information security (shortened to “infosec” — my, how hep) who insist that even if such a thing (security that is porous to select people or agencies only) were possible, the demand is incompatible with the whole idea of security, and indeed privacy. The whole business strikes me as a straw man argument. Here is Evans’ final paragraph:

If you don’t understand how technology works — especially a technical subgenre as complex and dense as encryption and information security — then don’t write about it. Don’t even have an opinion about what is and isn’t possible; just accept that you don’t know. But if you must opine, then please, at least don’t pretend technology is magic. That attitude isn’t just wrong, it’s actually dangerous.

Evans is pushing on a string, making the issue seem as though agencies that simply want what they want believe in turn that those things come into existence by the snap of one’s fingers, or magically. But in reality beyond hyperbole, absolutely no one believes that science and technology are magic. Rather, software and human-engineered tools are plainly understood as mechanisms we design and fabricate through our own effort even if we don’t understand the complexity of the mechanism under the hood. Further, everyone beyond the age of 5 or 6 loses faith in magical entities such as the Tooth Fairy, unicorns, Fairy Godmothers, etc. at about the same time that Santa Claus is revealed to be a cruel hoax. A sizable segment of the population for whom the Reality Principle takes firm root goes on to lose faith in progress, humanity, religion, and god (which version becomes irrelevant at that point). Ironically, the typically unchallenged thinking that technology delivers, among other things, knowledge, productivity, leisure, and other wholly salutary effects — the very thinking a writer for TechCrunch might exhibit — falls under the same category.

Who are these magical creatures who believe their smartphones, laptops, TVs, vehicles, etc. are themselves magical simply because their now routine operations lie beyond the typical end-user’s technical knowledge? And who besides ideologues actually makes the bogus substitution of magic for mechanism that Arthur C. Clarke described? No one, really. Jon Evans does no one any favors by raising this argument — presumably just to puncture it.

Observe how people actually use the technology now available in, say, handheld devices with 24/7/365 connection to the Internet (so long as the batteries hold out, anyway): it’s not the device that seems magical but the feeling of being connected, knowledgeable, and at the center of activity, with a constant barrage of information (noise, mostly) barreling at users and defying them to turn attention away lest something important be missed. People are so dialed into their devices, they often lose touch with reality, much like the politicians who no longer relate to or empathize with voters, preferring to live in their heads with all the chatter, noise, news, and entertainment fed to them like an endorphin drip. Who cares how the mechanism delivers, so long as supply is maintained? Similarly, who cares how vaporware delivers unjust access? Just give us what we want! Evans would do better to argue against the unjust desire for circumvention of security rather than its presumed magical mechanism. But I guess that idea wouldn’t occur to a technophiliac.
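To underline that nothing magical is at stake: the mechanism being demanded is mundane, and so is its weakness. Here is a minimal sketch of key escrow (my own illustration using Python’s cryptography library; the scheme and names are assumptions, not any actual proposal):

```python
# Sketch of a "golden key" (key escrow) scheme and its obvious flaw.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

alice_key = Fernet.generate_key()   # the user's key
escrow_key = Fernet.generate_key()  # the agency's "golden key"

def send(message: bytes, user_key: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt under a fresh session key, then wrap that session key
    for both the user and the escrow holder."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(message)
    for_user = Fernet(user_key).encrypt(session_key)
    for_escrow = Fernet(escrow_key).encrypt(session_key)
    return ciphertext, for_user, for_escrow

ciphertext, for_user, for_escrow = send(b"meet at noon", alice_key)

# The recipient can read the message...
session = Fernet(alice_key).decrypt(for_user)
print(Fernet(session).decrypt(ciphertext))   # b'meet at noon'

# ...but so can whoever holds (or steals) the single escrow key.
session = Fernet(escrow_key).decrypt(for_escrow)
print(Fernet(session).decrypt(ciphertext))   # b'meet at noon'
```

One stolen escrow key unlocks everyone’s traffic, which is the entirety of the infosec experts’ point: security porous to select parties is not security at all.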

/rant off

The comic below alerted me some time ago to the existence of Vaclav Smil, whose professional activity includes nothing less than inventorying the planet’s flora and fauna.

Although the comic (more infographic, really, since it’s not especially humorous) references Smil’s book The Earth’s Biosphere: Evolution, Dynamics, and Change (2003), I picked up instead Harvesting the Biosphere: What We Have Taken from Nature (2013), which has a somewhat more provocative title. Smil observes early in the book that mankind has had a profound, some would even say geological, impact on the planet:

Human harvesting of the biosphere has transformed landscapes on vast scales, altered the radiative properties of the planet, impoverished as well as improved soils, reduced biodiversity as it exterminated many species and drove others to a marginal existence, affected water supply and nutrient cycling, released trace gases and particulates into the atmosphere, and played an important role in climate change. These harvests started with our hominin ancestors hundreds of thousands of years ago, intensified during the era of Pleistocene hunters, assumed entirely new forms with the adoption of sedentary life ways, and during the past two centuries transformed into global endeavors of unprecedented scale and intensity. [p. 3]

Smil’s work is essentially a gargantuan accounting task: measuring the largest possible amounts of biological material (biomass) in both their current state and then across millennia of history in order to observe and plot trends. In doing so, Smil admits that accounts are based on far-from-perfect estimates and contain wide margins of error. Some of the difficulty owes to lack of methodological consensus among scientists involved in these endeavors as to what counts, how certain entries should be categorized, and what units of measure are best. For instance, since biomass contains considerable amounts of water (percentages vary by type of organism), inventories are often expressed in terms of fresh or live weight (phytomass and zoomass, respectively) but then converted to dry weight and converted again to biomass carbon.
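To make that conversion chain concrete, here is a sketch (entirely illustrative; the water and carbon fractions below are rough, commonly cited values, not Smil’s figures):

```python
# Fresh (live) weight -> dry weight -> biomass carbon, in tonnes.
def fresh_to_carbon(fresh_weight_t: float,
                    water_fraction: float = 0.70,    # ~0.5 (wood) to ~0.9 (leaves)
                    carbon_fraction: float = 0.45):  # carbon share of dry matter
    dry_weight_t = fresh_weight_t * (1.0 - water_fraction)
    return dry_weight_t * carbon_fraction

# 100 t fresh phytomass -> 30 t dry matter -> 13.5 t carbon
print(fresh_to_carbon(100.0))
```

With water content alone ranging from roughly half (wood) to nine-tenths (leafy matter), it’s easy to see how different assumptions yield widely divergent inventories, hence the wide margins of error.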


Nafeez Ahmed has published a two-part article at Motherboard entitled “The End of Endless Growth” (see part 1 and part 2). Commentary there is, as usual, pretty nasty, so I only skimmed and won’t discuss it. Ahmed’s first part says that things are coming to their useful ends after an already extended period of decline, but the second argues instead that we’re already in the midst of a phase shift as (nothing less than) civilization transforms itself, presumably into something better. Ahmed can apparently already see the end of the end (at the start of a new year, natch). In part 1, he highlights primarily the work of one economist, Mauro Bonaiuti of the University of Turin (Italy), despite Bonaiuti standing on the shoulders of numerous scientists far better equipped to read the tea leaves, diagnose, and prognosticate. Ahmed (via Bonaiuti) acknowledges that crisis is upon us:

It’s the New Year, and the global economic crisis is still going strong. But while pundits cross words over whether 2015 holds greater likelihood of a recovery or a renewed recession, new research suggests they all may be missing the bigger picture: that the economic crisis is symptomatic of a deeper crisis of industrial civilization’s relationship with nature.

“Civilization’s relationship with nature” is precisely what Ahmed misunderstands throughout the two articles. His discussion of declining EROEI and exponential increases in population, resource extraction and consumption, energy use and CO2 emissions, and species extinction is a good starting point, but he connects the wrong dots. He cites Bonaiuti’s conclusion that “endless growth on a finite planet is simply biophysically impossible, literally a violation of one of the most elementary laws of physics: conservation of energy, and, relatedly, entropy.” Yet he fails to understand what that means beyond the forced opportunity to reset, adapt, and reorganize according to different social models.
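The arithmetic behind that impossibility is worth a moment (my own back-of-the-envelope, not Ahmed’s or Bonaiuti’s):

```python
# Doubling time under compound growth: t = ln(2) / ln(1 + r).
import math

rate = 0.03  # 3% annual growth, a typical target for GDP or energy use
t_double = math.log(2) / math.log(1 + rate)

print(f"doubling time: {t_double:.1f} years")        # ~23.4 years
print(f"after ten doublings: x{2**10} consumption "  # 1024-fold
      f"in ~{10 * t_double:.0f} years")
```

No finite stock survives many doublings; that, and nothing more exotic, is the content of the biophysical impossibility claim.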

At no point does Ahmed mention the rather obvious scenario where many billions of people die from lack of clean water, food, and shelter when industrial civilization grinds to a halt — all this before we have time to complete our phase shift. At no point does Ahmed mention the likelihood of widespread violence sparked by desperate populations facing immediate survival pressure. At no point does Ahmed mention the even worse likelihood of multiple nuclear disasters (hundreds!) when infrastructure fails and nuclear plants start popping like firecrackers.

What does Ahmed focus on instead? He promises “cheap, distributed clean energy” (going back up the EROEI slope) and a transition away from industrial agriculture toward relocalization and agroecology. However, these are means of extending population, consumption, and despoliation further into overshoot, not plans for sustainability at a far lower population. Even more worrisome, Ahmed also cites ongoing shifts in information, finance, and ethics, all of which are sociological constructs that have been reified in the modern world. These shifts are strikingly “same, only different,” except perhaps for the ethics revolution. Ahmed says we’re already witnessing a new ethics arising: “a value system associated with the emerging paradigm is … supremely commensurate with what most of us recognize as ‘good’: love, justice, compassion, generosity.” I just don’t see it yet. Rather, I see continued accumulation of power and wealth among oligarchs and plutocrats, partly through the use of institutionalized force (looking increasingly like mercenaries and henchmen).

Also missing from Ahmed’s salve for our worries is discussion of ecological collapse in the form of climate change and its host of associated causes and effects. At a fundamental level, the biophysical conditions for life on earth are changing from the relative steady state of the last 200,000 years or so that humans have existed, or more broadly, the 65 million years since the last major extinction event. The current rate of change is far too rapid for evolution and culture to adapt. New ways of managing information, economics, and human social structures simply cannot keep up.

All that said, well, sure, let’s get going and do what can be done. I just don’t want to pretend that we’re anywhere close to a new dawn.