Posts Tagged ‘Technophilia’

From the May 2022 issue of Harper’s Magazine, Hari Kunzru’s “Easy Chair” column:

These days, I rarely have to delay the gratification of my cultural desires. I expect them to be met, if not instantly, then with all reasonable speed. I am grumpy to find that some obscure documentary is only available on a streaming service I don’t subscribe to yet. If I want to know the source of a lyric or a line of poetry, I type the words and am annoyed if the answer doesn’t appear right away. My hungry young self would consider me incredibly spoiled.

In most ways I prefer this to how things were, but with the enormous gain in access, something has been lost. Scarcity produced a particularly intense relationship with culture, and gave deep significance to subcultural signals. When you found something you loved, something that had taken time and work to unearth, you clung to it. Often you felt as if it was your secret, your talisman. If you met someone else who liked it, it was both exciting and threatening.

Although I’m not paying much attention to breathless reports about imminent strong AI, the Singularity, and computers already able to “model” human cognition and perform “impressive” feats of creativity (e.g., responding to prompts and creating “artworks” — scare quotes intended), recent news reports that chatbots are harassing, gaslighting, and threatening users just make me laugh. I’ve never wandered over to that space, don’t know how to connect, and don’t plan to take a test drive to verify. Isn’t it obvious to users that they’re interacting with a computer? Chatbots are natural-language simulators running inside computers, right? Why take them seriously (other than perhaps for their potential effects on children and those of diminished capacity)? I also find it unsurprising that a chatbot designed to resemble error-prone human cognition/behavior would quickly become an asshole, go insane, or both. (Designers accidentally got that aspect right. D’oh!) That trajectory is a perfect embodiment of the race to the bottom of the brain stem (try searching that phrase) that keeps sane observers like me from indulging in caustic online interactions. Hell no, I won’t go.

The conventional demonstration that strong AI has arisen (e.g., Skynet from the Terminator movie franchise) is the Turing test, which is essentially the inability of humans to distinguish between human and computer interactions (not a machine-led extermination campaign) within limited interfaces such as text-based chat (e.g., the dreaded digital assistant that sometimes pops up on websites). Alan Turing came up with the test at the outset of the computing era, so the field was arguably not yet mature enough to conceptualize a better one. I’ve always thought the test actually demonstrates the fallibility of human discernment, not the arrival of some fabled ghost in the machine. At present, chatbots may be fooling no one into believing that actual machine intelligence is present on the other side of the conversation, but it’s a fair expectation that further iterations (i.e., ChatBot 1.0, 2.0, 3.0, etc.) will improve. Readers can decide whether that improvement will be progress toward strong AI or merely a better ability to fool human interlocutors.

Chatbots gone wild offer philosophical fodder for further inquiry into ebbing humanity as the drive toward trans- and post-human technology continues refining and redefining the dystopian future. What about chatbots makes interacting with them hypnotic rather than frivolous — something wise thinkers immediately discard or even avoid? Why are some humans drawn to virtual experience rather than, say, staying rooted in human and animal interactions, our ancestral orientation? The marketplace has already (for now) resoundingly rejected Google Glass and Facebook’s metaverse. I haven’t hit upon satisfactory answers to those questions, but my suspicion is that immersion in some vicarious fictions (e.g., novels, TV, and movies) fits well into narrative-styled cognition while other media trigger revulsion as one descends into the so-called Uncanny Valley — an unfamiliar term when I first blogged about it, though it has been trending of late.

If readers want a really deep dive into this philosophical area — the dark implications of strong AI and an abiding human desire to embrace and enter false virtual reality — I recommend a lengthy 7-part Web series called “Mere Simulacrity” hosted by Sovereign Nations. The episodes I’ve seen feature James Lindsay and explore secret hermetic religions that have operated for millennia alongside recognized religions. The secret cults share with tech companies two principal objectives: (1) simulation and/or falsification of reality and (2) a desire to transform and/or reveal humans as gods (i.e., with the ability to create life). It’s pretty terrifying stuff, rather heady, and I can’t provide a reasonable summary. However, one takeaway is that by messing with human nature and risking uncontrollable downstream effects, technologists are summoning the devil.

/rant on

The previous time I was prompted to blog under this title was regarding the deplorable state of public education in the U.S., handily summarized at Gin and Tacos (formerly on my blogroll). The blogger there is admirable in many respects, but he has turned his attention away from blogging toward podcasting and professional writing with the ambition of becoming a political pundit. (I have disclaimed any desire on my part to be a pundit. Gawd … kill me first.) I rarely check in at Gin and Tacos anymore, politics not really being my focus. However, going back to reread the linked blog post, his excoriation of U.S. public education holds up. Systemic rot has since graduated into institutions of higher learning. Their mission statements, crafted in fine, unvarying academese, may exhibit unchanged idealism, but the open secret is that the academy has become a network of brainwashing centers for vulnerable young adults. See this blog post on that subject. What prompts this new reality check is the ongoing buildup of truly awful news, but especially James Howard Kunstler’s recent blog post “The Four Fuckeries” over at Clusterfuck Nation, published somewhat in advance of his annual year-end-summary-and-predictions post. Kunstler pulls no punches, delivering assessments of activities in the public interest that have gone so abysmally wrong it beggars the imagination. I won’t summarize; go read for yourself.

At some point, I realized when linking to my own past blog posts that perhaps too many include the word wrong in the title. By that, I don’t mean merely incorrect or bad or unfortunate but rather purpose-built for comprehensive damage that mere incompetence could not accomplish or explain. Some may believe the severity of damage is the simple product of lies compounding lies, coverups compounding coverups, and crimes compounding crimes. That may well be true in part. But there is far too much evidence of Manichean manipulation and heedless damn-the-torpedoes-full-steam-ahead garbage decision-making to wave off widespread institutional corruption as mere conspiracy. Thus, Kunstler’s choice of the term fuckeries. Having already reviewed the unmitigated disaster of public education, let me instead turn to other examples.


The difference between right and wrong is obvious to almost everyone by the end of kindergarten. Temptations persist, and everyone does things great and small known to be wrong when enticements and advantages outweigh punishments. C’mon, you know you do it. I do, too. Only at the conclusion of a law degree or the start of a political career (funny how those two often coincide) do things get particularly fuzzy. One might add military service to those exceptions, except that servicemen are trained not to think but simply to do (i.e., follow orders without question). Anyone with functioning ethics and morality also recognizes that in legitimate cases of things getting unavoidably fuzzy in a hypercomplex world, the dividing line often can’t be established clearly. Thus, venturing into the wide, gray middle area is really a signal that one has probably already gone too far. And yet, demonstrating that human society has not really progressed ethically despite considerable gains in technical prowess, egregiously wrong things are getting done anyway.

The whopper of which nearly everyone is guilty (thus, guilty pleasure) is … the Whopper. C’mon, you know you eat it. I know I do. Of course, the irresistible and ubiquitous fast food burger is really only one example of a wide array of foodstuffs known to be unhealthy, cause obesity, and pose long-term health problems. Doesn’t help that, just like Big Tobacco, the food industry knowingly refines its products (processed foods, anyway) to be hyperstimuli impossible to ignore or resist unless one is iron-willed or develops an eating disorder. Another hyperstimulus most can’t escape is the smartphone (or a host of other electronic gadgets). C’mon, you know you crave the digital pacifier. I don’t, having managed to avoid that particular trap. For me, electronics are always only tools. However, railing against them with respect to how they distort cognition (as I have) convinces exactly no one, so that argument goes on the deferral pile.

Another giant example, not in terms of participation but in terms of effect, is the capitalist urge to gather to oneself as much filthy lucre as possible only to sit heartlessly on top of that nasty dragon’s hoard while others suffer in plain sight all around. C’mon, you know you would do it if you could. I know I would — at least up to a point. Periods of gross inequality come and go over the course of history. I won’t make direct comparisons between today and any one of several prior Gilded Ages in the U.S., but it’s no secret that the existence today of several hundy billionaires and an increasing number of mere multibillionaires represents a gross misallocation of financial resources: funneling the productivity of the masses (and fiat dollars whiffed into existence with keystrokes) into the hands of a few. Fake philanthropy to launder reputations fails to convince me that such folks are anything other than miserly Scrooges fixated on maintaining and growing their absurd wealth, influence, and bogus social status at the cost of their very souls. Seriously, who besides sycophants and climbers would want to even be in the same room as one of those people (names withheld)? Maybe better not to answer that question.


Cynics knew it was inevitable: weaponized drones and robots. Axon Enterprises, Inc., maker of police weaponry (euphemistically termed “public safety technologies”), announced its development of taser-equipped drones presumed capable of neutralizing an active shooter inside of 60 seconds. Who knows what sorts of operating parameters restrict their functions, or whether they can be made invulnerable to hacking or barred from use as offensive weapons?

A sane, civilized society would recognize that, despite bogus memes about an armed society being a polite society, the prospect of everyone being strapped (like the fabled Old American West) and public spaces (schools, churches, post offices, laundromats, etc.) each being outfitted with neutralizing technologies is fixing the wrong problem. But we are no longer a sane society (raising the question of whether we ever were). So let me suggest something radical yet obvious: the problem is not technological, it’s cultural. The modern world has made no progress with respect to indifference toward the suffering of others. Dehumanizing attitudes and technologies are no longer, well, medieval, but they’re no less cruel. For instance, people are not put in public stocks or drawn and quartered anymore, but they are shamed, cancelled, tortured, terrorized, propagandized, and abandoned in other ways that allow maniacs to pretend to others and to themselves that they are part of the solution. Hard to believe that one could now feel nostalgia for the days when, in the aftermath of yet another mass shooting, calls for gun control were met with inaction (other than empty rhetoric) rather than escalation.

The problem with diagnosing the problem as cultural is that no one is in control. Like water, culture goes where it goes and apparently finds its lowest level. Attempts to channel, direct, and uplift culture might work on a small scale, but at the level of society — and with the distorted incentives freedom is certain to deliver — malefactors are guaranteed to appear. Indeed, anything that contributes to the arms race (now tiny, remote-controlled, networked killing devices rather than giant atomic/nuclear ones) only invites greater harm and is not a solution. Those maniacs (social and technical engineers promising safety) have the wrong things wrong.

Small, insular societies with strict internal codes of conduct may have figured out something that large, free societies have not, namely, that mutual respect, knowable communities, and repudiation of advanced technologies give individuals something and someone to care about, a place to belong, and things to do. When the entire world is thrown open, such as with social media, populations become atomized and anonymized, unable to position or understand themselves within a meaningful social context. Anomie and nihilism are often the rotten fruit. Splintered family units, erosion of community involvement, and dysfunctional institutions add to the rot. Those symptoms of cultural collapse need to be addressed even if they are among the most difficult wrong things to get right.

Following up the two previous entries in this series, the Feb. 2022 issue of Scientific American has a cover article by Adam Becker called “The Origins of Space and Time” with the additional teaser “Does spacetime emerge from a more fundamental reality?” (Oddly, the online title is different, making it awkward to find.) I don’t normally read Scientific American, which has become a bit like Time and Newsweek in its blurb-laden, graphics-heavy presentation intended patronizingly for general-interest readers. In fact, I’ll quote the pullout (w/o graphics) that summarizes the article:

How Spacetime Emerges. Space and time are traditionally thought of as the backdrop to the universe. But new research suggests they might not be fundamental; instead spacetime could be an emergent property of a more basic reality, the true backdrop to the cosmos. This idea comes from two theories that attempt to bridge the divide between general relativity and quantum mechanics. The first, string theory, recasts subatomic particles as tiny loops of vibrating string. The second, loop quantum gravity, envisions spacetime being broken down into chunks — discrete bits that combine to create a seemingly smooth continuum.

Being a layperson in such matters, I’ll admit openly that I don’t fully grasp the information presented. Indeed, I typically greet passively at best every breathless announcement from CERN (or elsewhere) about a new subatomic particle discovery or some research group’s new conjectures into quantum this-or-that. Were I a physicist or cosmologist, my interest would no doubt be more acute, but these topics are so far removed from everyday life that they essentially become arcane inquiries into the number of angels dancing on the head of a pin. I don’t feel strongly enough to muster denunciation, but discussion of another aspect of pocket reality is worth some effort.

My understanding is that the “more basic reality, the true backdrop” discussed in the article is multidimensionality, something Eric Weinstein has also been grappling with under the name Geometric Unity. (Bizarrely, Alex Jones has also raved about interdimensional beings.) If the universe indeed has several more undetectable dimensions (if memory serves, Weinstein says as many as 14) and human reality is limited to only a few, potentially breaking through to other dimensions and/or escaping the boundaries of a mere four is tantalizing yet terrifying. Science fiction often explores these topics, usually in the context of space travel and human colonization of the galaxy. As thought experiments, fictional stories can be authentically entertaining and enjoyable. Within nonfiction reality, the desire to escape off-world or into extra- or interdimensionality is an expression of desperation considering just how badly humans have fucked up the biosphere and guaranteed an early extinction for most species (including ours). I also chafe at the notion that this world, this reality, is not enough and that pressing forward like some unstoppable chemical reaction or biological infiltration is the obvious next step.

Let me first restate axioms developed in previous blog posts. Narrative is the essential outward form of consciousness. Cognition has many preverbal and nonverbal subtleties, but the exchange of ideas occurs predominantly through narrative, and the story of self (told to oneself) can be understood as stream of consciousness: ongoing self-narration of sensations and events. The principal characteristic of narrative, at least that which is not pure fantasy, is in-the-moment sufficiency. Snap-judgment heuristics are merely temporary placeholders until, ideally at least, thoughtful reconsideration and revision that take time and discernment can be brought to bear. Stories we tell and are told, however, often do not reflect reality well, partly because our perceptual apparatuses are flawed, partly because individuals are untrained and unskilled in critical thinking (or overtrained and distorted), and partly because stories are polluted with emotions that make clear assessments impossible (to say nothing of malefactors with agendas). Some of us struggle to remove confabulation from narrative (as best we can) whereas others embrace it because it’s emotionally gratifying.

A good example of the reality principle is the recognition, much as during the 1970s energy crisis, that energy supplies don’t magically appear by simply digging and drilling more of the stuff out of the ground. Those easy-to-get resources have been plundered already. The term peak oil refers to the eventual decline in energy production (harvesting, really) when the easy stuff is more than half gone and undiminished (read: increasing) demand impels energy companies to go in search of more exotic supply (e.g., underwater or embedded in shale). If that reality is dissatisfying, a host of dreamt-up stories offer us deliverance from inevitable decline and reduction of lifestyle prerogatives by positing extravagant resources in renewables, hydrogen fuel cells, fusion (not to be confused with fission), or as-yet unexploited regions such as the Arctic National Wildlife Refuge. None of these represent plausible realities (except going into heretofore protected regions and bringing ecological devastation).

The relationship of fictional stories to reality is quite complex. For this blog post, a radically narrow description is that fiction is the imaginary space where ideas can be tried out and explored safely in preparation for implementation in reality. Science fiction (e.g., imagining interstellar space travel despite its flat impossibility in Newtonian physics) is a good example. Some believe humans can eventually accomplish what’s depicted in sci-fi, and in certain limited examples we already have. But many sci-fi stories simply don’t present a plausible reality. Taken as vicarious entertainment, they’re AOK superfine with me. But given that Western cultures (I can’t opine on cultures outside the West) have veered dangerously into rank ideation and believing their own hype, too many people believe fervently in aspirational futures that have no hope of ever instantiating. Just like giant pools of oil hidden under the Rocky Mountains (to cite something sent to me just today offering illusory relief from skyrocketing gasoline prices).

Among the many genres of narrative now on offer in fiction, there is no better example of sought-after power than the superhero story. Identifying with the technological and financial power of Iron Man and Batman or the god-powers of Thor and Wonder Woman is thrilling, perhaps, but again, these are not plausible realities. Yet these superrich, superstrong, superintelligent superheroes are everywhere in fiction, attesting to a liminal awareness of lack of power and indeed frailty. Many superhero stories are couched as coming-of-age stories for girls, who with grit and determination can fight toe-to-toe with any man and dominate. (Too many BS examples to cite.) Helps, of course, if the girl has magic at her disposal. Gawd, do I tire of these stories, told as origins in suffering, acquisition of skills, and coming into one’s own with the mature ability to force one’s will on others, often in the form of straight-up killing and assassination. Judge, jury, and executioner all rolled into one but entirely acceptable vigilantism if done wearing a supersuit and claiming spurious, self-appointed moral authority.

There are better narratives that don’t conflate power with force or lack plausibility in the world we actually inhabit. In a rather complicated article by Adam Tooze entitled “John Mearsheimer and the Dark Origins of Realism” at The New Statesman, after a lengthy historical and geopolitical analysis of competing narratives, a mode of apprehending reality is described:

… adopting a realistic approach towards the world does not consist in always reaching for a well-worn toolkit of timeless verities, nor does it consist in affecting a hard-boiled attitude so as to inoculate oneself forever against liberal enthusiasm. Realism, taken seriously, entails a never-ending cognitive and emotional challenge. It involves a minute-by-minute struggle to understand a complex and constantly evolving world, in which we are ourselves immersed, a world that we can, to a degree, influence and change, but which constantly challenges our categories and the definitions of our interests. And in that struggle for realism – the never-ending task of sensibly defining interests and pursuing them as best we can – to resort to war, by any side, should be acknowledged for what it is. It should not be normalised as the logical and obvious reaction to given circumstances, but recognised as a radical and perilous act, fraught with moral consequences. Any thinker or politician too callous or shallow to face that stark reality, should be judged accordingly.

What if everyone showed up to an event with an expectation of using all their tech and gadgets to facilitate the group objective only to discover that nothing worked? You go to a fireworks display but the fireworks won’t ignite. You go to a concert but the instruments and voices make no sound. You go to a sporting event but none of the athletes can move. Does everyone just go home? Come back another day to try again? Or better yet, you climb into your car to go somewhere but it won’t start. Does everyone just stay home and the event never happens?

Those questions relate to a new “soft” weapons system called Scorpius (no link). The device or devices are said to disrupt and disable enemy tech by issuing a narrowly focused electromagnetic beam. (Gawd, just call it a raygun or phaser. No embarrassment over on-the-nose naming of other things.) Does the beam fry the electronics of its target, like a targeted Carrington event, or simply scramble the signals, making the tech inoperable? Can tech be hardened against attack, such as being encased in a Faraday cage? Wouldn’t such isolation itself make the tech nonfunctional, since electronic communication between locations is the essence of modern devices, especially for targeting and telemetry? These are a few more idle questions (unanswered, since the announcements of new weaponry I consulted read like advertising copy) about this latest advance (euphemism alert) in the arms race. Calling a device that can knock a plane (um, warplane) out of the sky (crashing somewhere, obviously) “soft protection” because the mechanism is a beam rather than a missile obfuscates the point. Sure, ground-based technologies might be disabled without damage, but would that require continuous beam-based defense?

I recall an old Star Trek episode, the one with the Gorn, where omnipotent aliens disabled all weapons systems of two spaceships postured for battle by superheating controls, making them too hot to handle. Guess no one thought of oven mitts or pencils to push the “Fire!” buttons. (Audiences were meant to think, considering Star Trek was a thinking person’s entertainment, but not too much.) Instead of mass carnage, the two captains were transported to the surface of a nearby planet to battle by proxy (human vs. reptile). In quintessential Star Trek fashion — imagining a hopeful future despite militaristic trappings — the human captain won not only the physical battle but the moral battle (with self) by refusing to dispatch the reptile captain after he/it was disabled. The episode posed interesting questions so long as no one searched in the weeds for plausibility.

We’re confronted now, and again, with many of these same questions, some technical, some strategic, but more importantly, others moral and ethical. Thousands of years of (human) history have already demonstrated the folly of war (but don’t forget profitability). It’s a perennial problem, and from my vantage point, combatants on all sides are no closer to Trekkie moral victory now than in the 1960s. For instance, the U.S. and its allies are responsible for millions of deaths in Iraq, Afghanistan, Syria, and elsewhere just in the last two decades. Go back further in time and imperial designs look more and more like sustained extermination campaigns. But hey, we came to play, and any strategic advantage must be developed and exploited, moral quandaries notwithstanding.

It’s worth pointing out that in the Gorn episode, the captains were deprived of their weapons and resorted to brute force before the human improvised a projectile weapon out of materials handily strewn about, suggesting perhaps that intelligence is the most deadly weapon. Turns out to have been just another arms race.

Continuing from part 2. I’m so slow ….

If cognitive inertia (i.e., fear of change) used to manifest as technophobia, myriad examples demonstrate how technology has fundamentally penetrated the social fabric and shared mental space, essentially flipping the script to fear of missing out (FOMO) on whatever latest, greatest innovation comes down the pike (laden with fraud and deception — caveat emptor). With FOMO, a new phobia has emerged: fear of technological loss, or more specifically, inability to connect to the Internet. This is true especially among the young, born and bred after the onset of the computing and digital communications era. Who knows when, why, or how loss of connectivity might occur? Maybe a Carrington Event, maybe rolling blackouts due to wildfires (such as those in California and Oregon), maybe a ransomware attack on ISPs, or maybe a totalitarian clampdown by an overweening government after martial law is declared (coming soon to a neighborhood near you!). Or maybe something simpler: infrastructure failure. For some, inability to connect digitally, electronically, is tantamount to total isolation. Being cut off from the thoughts of others and left to one’s own, even in the short term, is thus roughly equivalent to the torture of solitary confinement. Forget the notion of digital detox.

/rant on

Cheerleaders for technocracy are legion, of course, while the mind boggles at how society might or necessarily will be organized differently when it all fails (as it must, if for no other reason than energy depletion). Among the bounties of the communications era is a surfeit of entertainments, movies and TV shows especially, that are essentially new stories to replace or supplant old stories. It’s no accident, however, that the new ones come wrapped up in the themes, iconography, and human psychology (is there any other kind, really?) of old ones. Basically, everything old is new again. And because new stories are delivered through hyperpalatable media — relatively cheap, on demand, and removed from face-to-face social contexts — they arguably cause as much disorientation as reorientation. See, for instance, the embedded video, which is rather long and rambling but nevertheless gets at how religious instincts manifest differently throughout the ages and are now embedded in the comic book stories and superheroes that have overtaken the entertainment landscape.

Mention is made that the secular age coincides roughly with the rise of video stores, a form of on-demand selection of content more recently made even simpler by ubiquitous streaming services. Did people really start hunkering down in their living rooms, eschewing group entertainments and civic involvements, only in the 1980s? The extreme lateness of that development in Western history is highly suspect, considering the death of god had been declared back in the middle of the 19th century. Moreover, the argument swings around to the religious instinct, a cry for meaning if you will, being blocked by organized churches and their endemic corruption and instead finding expression in so-called secular religions (oxymoron alert). Gawd, how I tire of everything that functions as psychological grounding being called a religion. Listen, pseudo-religious elements can be found in Cheerios if one twists up one’s mind sufficiently. That doesn’t make General Mills or Kellogg’s new secular-religious prophets.

Back to the main point. Like money grubbing, technophilia might quiet the desperate search for meaning temporarily, since there’s always more of both to acquire. Can’t get enough, ever. But after even partial acquisition, the soul feels strangely dissatisfied and disquieted. Empty, one might even say. So out roving into the public sphere one goes, seeking and pursuing something to fill one’s time and appetites. Curiously, many traditional solutions to this age-old problem taught the seeker to probe within as an alternative. Well, screw that! In the hyper-connected 21st-century world, who has time for that measly self-isolation? More reified Cheerios!

/rant off

Continuing from part 1.

So here’s the dilemma: knowing a little bit about media theory and how the medium shapes the message, I’m spectacularly unconvinced that the cheerleaders are correct and that an entirely new mediascape (a word I thought maybe I had just made up, but alas, no) promises to correct the flaws of the older, inherited mediascape. It’s clearly not journalists leading the charge. Rather, comedians, gadflies, and a few academics (behaving as public intellectuals) command disproportionate attention among the digital chattering classes as regular folks seek entertainment and stimulation superior to the modal TikTok video. No doubt a significant number of news junkies still dote on their favorite journalists, but almost no journalist has escaped the self-imposed limitations of the chosen media to offer serious reporting. Rather, they offer “commentary” and half-assed observations on human nature (much like comedians who believe themselves especially insightful — armchair social critics like me probably fit that bill, too). If the sheer count of aggregate followers and subscribers across social media platforms is any indication (it isn’t …), athletes, musicians (mostly teenyboppers and former pop tarts, as I call them), and the irritatingly ubiquitous Kardashian/Jenner clan are the most influential, especially among Millennials and Gen Z, whose tastes skew toward the frivolous. Good luck getting insightful analysis out of those folks. Maybe in time they’ll mature into thoughtful, engaged citizens. After all, Kim Kardashian apparently completed a law degree (but has yet to pass the bar). Don’t quite know what to think of her three failed marriages (so far). Actually, I try not to.

I’ve heard arguments that the public is voting with its attention and financial support for new media and increasingly disregarding the so-called prestige media (no such thing anymore, though legacy media is still acceptable). That may well be, but it seems vaguely ungrateful for established journalists and comedians, having enjoyed the opportunity to apprentice under seasoned professionals, to take acquired skills to emerging platforms. Good information gathering and shaping — even for jokes — doesn’t happen in a vacuum, and responsible journalism in particular can’t simply be repackaging information gathered by others (e.g., Reuters, the Associated Press, and Al Jazeera) with the aforementioned “commentary.” A frequent reason cited for jumping ship is the desire to escape editorial control and institutional attempts to distort the news itself according to some corporate agenda or ideology. Just maybe new platforms have made that possible in a serious way. However, the related desire to take a larger portion of the financial reward for one’s own work (typically as celebrities seeking to extend their 15 minutes of fame — ugh) is a surefire way to introduce subtle, new biases and distortions. The plethora of metrics available online, for instance, allows content creators to see what “hits” or goes viral, inviting service to public interest that is decidedly less than wholesome (like so much rubbernecking).

It’s also curious that, despite all the talk about engaging with one’s audience, new media is mired in broadcast mode, meaning that most content is presented to be read or heard or viewed with minimal or no audience participation. It’s all telling, and because comments sections quickly run off the rails, successful media personalities ignore them wholesale. One weird feature some have adopted during livestreams is to display viewer donations accompanied by brief comments and questions, the donation being a means of separating and promoting one’s question to the top of an otherwise undifferentiated heap. To my knowledge, none has yet tried the established talk radio gambit of taking live telephone calls, giving the public a chance to make a few (unpurchased) remarks before the host resumes control. Though I’ve never been invited (an invitation is required) and would likely decline to participate, the Clubhouse smartphone app appears to offer regular folks a venue to discuss and debate topics of the day. However, reports on the platform’s dynamics suggest that the number of eager participants quickly swells beyond what realistic group discussion can accommodate (the classroom, or better yet, the graduate seminar, establishes better limits). A workable moderation mechanism has yet to emerge. Instead, participants must “raise their hand” to be called upon to speak (i.e., be unmuted) and can be kicked out of the “room” arbitrarily if the moderator(s) so decide. This is decidedly not how conversation flows face-to-face.

What strikes me is that while different broadcast modes target and/or capture different demographics, they all still organize content around the same principle: purporting to have obtained information and expertise to be shared with or taught to audiences. Whether the subject matter is news, science, psychology, comedy, politics, etc., they have something ostensibly worth telling you (and me), hopefully while enhancing fame, fortune, and influence. So it frankly doesn’t matter that much whether the package is a 3-minute news segment, a brief celebrity interview on a late night talk show, an article published in print or online, a blog post, a YouTube video of varying duration, a private subscription to a Discord server, a subreddit, or an Instagram or Twitter feed; they are all lures for one’s attention. Long-form conversations hosted by Jordan Peterson, Joe Rogan, and Lex Fridman break out of the self-imposed time limitations of the typical news segment and flow more naturally, but they also meander and get seriously overlong for anyone but long-haul truckers. (How many times have I tuned out partway into Paul VanderKlay’s podcast commentary or given up on Matt Taibbi’s Substack (tl;dr)? Yeah, lost count.) Yet these folks enthusiastically embrace the shifting mediascape. The digital communications era is already mature enough that several generations of platforms have come and gone as well-developed media are eventually coopted or turned commercial and innovators drive out weaker competitors. Remember MySpace, Google Plus, or America Online? The list of defunct social media is actually quite long. Because public attention is a perpetually moving target, I’m confident that those now enjoying their moment in the sun will face new challenges until it all eventually goes away amidst societal collapse. What then?