Category: Technology

The New Internet is Forced Cannibalization

The saddest part of the internet right now, to me, is not necessarily its being smothered in algorithms that drive what we pay attention to all day, every day. That is a tragedy and something that we need to deal with or perish (mentally, anyway).

What I am most sad about when I look at the state of the internet is that its identity has fundamentally changed. When I was younger (probably even younger than a teenager) the internet was this wild frontier. Anything went, but you had to know where to find stuff. Links led to other links, and on and on down the rabbit hole.

Was it perfect back in the blogosphere days, or earlier? No. Take Alan Jacobs’s post today.

When Grandpa wrote against the blogosphere, that kind of site is what he had in mind: a constant stream of hot takes, some of which had to be walked back later because they were offered before, and instead of, reflective consideration. You’d therefore have a better sense of what I meant in that much-quoted line if you replaced “blogosphere” with “Twitter.”

“blogging and the blogosphere”

Note — the blogosphere, at one point, was not great. It was the equivalent of today’s Twitter, a stream of non-reflective takes every day. It probably brought on the constant news cycle that we all pretty much hate now.

But! There was also some beauty there, and this is what Alan gets at in the meat of this post. Blogging can be that rabbit-hole, linky version of the internet that many of us grew up with. More from the post:

I post a thought; later, I return to it with an update; someone responds and I incorporate their thoughts into a new post that links to them and to the original – basically, what I am doing right now. Note also that blogging, when done in this fashion and in this spirit, is also seriously dialogical, and I think there is a close connection between a dialogue-friendly medium and a forgiving medium. 

The incorporation, the back-and-forth, the dialogue is what makes blogging beautiful. And it’s what made the non-blog part of the internet before 2008 so fun too. We weren’t being force-fed new content all the time by what essentially amounts to non-democratic, institutionalized, whitewashed, walled gardens.

Think about it. Where do you go when you get on the internet? You go to one of, I’m guessing, 5-10 websites: Facebook/Instagram, Twitter, YouTube, Reddit, a news source… can you even think of any more?

It reminds me of that scene in Snowpiercer (SPOILER ALERT) where the people on the back of the train are given those gross gel bar things to eat, only to find out that those are made of their own dead. All the infighting and vitriol that happens on barely a handful of websites is essentially forced cannibalization while that handful of corporations makes billions of dollars off of our hatred.

That’s just sad to me. I miss the old internet.

Update: As a coda to this post, I’m gonna do what AJ suggests, and link to the Robin Sloan post he mentions. Here’s Robin:

One is that I want to say again: the High Blogging Era might be behind us, but there is still blogging to be done, and it is so easy and so rewarding to dip a toe in, start to follow a few of these feeds, and experience a different kind of network.

“Many Subtle Channels”

It strikes me, too, that in its purest form, blogging is just a sheer beautiful way to write and engage with the world. I accessed all of these words freely. It cost me nothing (besides a machine and an internet connection) to see these words, to think about them and what they have to say about technology and connection and culture. What a delight the internet was… perhaps what a delight it could still be.

Digital-Political-Informational Silos

A while back, I was reading a book called Fall by Neal Stephenson. Full disclosure — I haven’t finished this book. It’s long, and I found that its focus shifted away from what I really found interesting. That’s probably a mistake on my part, but the book lost my attention about halfway through. This post, however, is about the part of the book that I did find interesting.

It’s kind of a strange, sprawling, not-so-distant future sci-fi. Its focus is split between (roughly) two distinct stories. In the main story, an older man named Dodge dies, and his brain/consciousness is mapped and uploaded onto a server. Many years later, a young programmer figures out how to “wake up” Dodge’s consciousness. This is the part of the book that I lost interest in around the midway point.

The second focus of the book is on the outside world, and the various ways in which humans are dealing with (mis)information, false media narratives, news bias, and social media silos. I really wanted the book to be about this. As an example, near the beginning of the book, a fake nuclear attack on a small town outside of Moab, Utah, was staged via a widespread media blast coordinated with an internet and phone line shutdown for that section of the US. There were well-made press conferences, fake nuclear blast videos that looked like they came from airplanes, and widespread uncertainty throughout everyone’s social media and video feeds.

In other words, something that could EASILY happen in a situation like ours. Not-so-distant future, remember? Some of the characters end up figuring out that the hoax is happening (one of these characters is the CTO of a major social media company, so he feels the responsibility to figure out the real story and stop the spread of the misinformation campaign). Suffice it to say, this one moment has a massive impact on the US over the course of the next couple of decades. There are “Moab truthers” (people who believe a nuclear attack happened, when it did not). A new form of the internet is born where people are given private internet IDs so they can be easily identified.

The story skips ahead, and there has been some kind of secession in the US — there is now the regular USA, and what looks to be a sovereign country called “Ameristan.” Further, you find that everyone has a different, self-selected version of their access to the net — including news channels, articles, entertainment, etc. Some people just have algorithms choosing their information feeds for them. Others (the wealthy) tend to have personal news/media curators sifting through news and media and deciding whether it goes into the person’s net feed or not. This leads to massive polarization, because these feeds bend and shape how each person sees the world — what it is, what it’s not, who’s in control, and what is true of the past and present.


The thing that struck me the most about this depiction is not just its relevance, but how close it is to our own reality.

I’m not necessarily worried about secession or a civil war right now. But this whole notion of people splitting their feeds, breaking off into other forms of social media so they aren’t censored, and choosing the kinds of information they get (or being radicalized by information chosen for them) seems especially… prescient.

Since election day last week, I have seen multiple people claim that they are getting off of Facebook (not necessarily a bad thing, IMO). But instead of doing that to create a sense of calm or peace, they are simply moving to platforms where their information is not censored — Mastodon, Parler, Gab, etc. I don’t know if this is necessarily a bad thing, but doesn’t this come with a cost that we have a difficult time foreseeing? Facebook, because of its massive popularity, is a platform that pretty much everyone can use. Therefore, we have a bigger opportunity there to encounter those with differing viewpoints.

Don’t get me wrong. Facebook and Twitter have their own (MAJOR) problems. Their algorithms select for the most outrageous posts, videos, and images. When, as Tristan Harris says, the economics of the social media companies we use revolve around selling our attention, we will inevitably lose any autonomy we might have over what we consume — and ultimately what we think and how we see the world.

But when we start siloing ourselves within a particular social network with only those who agree with us, doesn’t this present an opportunity for radicalism to spread unchecked in these groups? A form of groupthink can emerge, and this can also produce cult-like behavior, where real, lasting harm is done to individuals.

I don’t know what the proper solution is here. Neither do the tech whistleblowers that we see in movies like The Social Dilemma. They see the problem, but (again, like Tristan Harris says) it’s like saying climate change is a problem. That’s true, but any possible solutions here are necessarily complex, because the entire internet, and therefore the entire digital economy, is inexorably interwoven with the attention economy embodied by the behemoth tech companies of Google/Facebook/Amazon/Apple/Twitter.

Information without Meaning

In the Information Age (the one in which we are living now), it’s really easy to assume that more information is always better. More information means being more informed, which should theoretically make us better citizens, better friends, better human beings. It should lead to increased knowledge, and to having a more coherent picture of reality.

As the amount of information available to us grows every moment, however, I think it’s safe to say that access to more information has not led to these outcomes. More information, somehow, makes us feel less informed. It also seems to lead to less coherent and cohesive understandings of what the world is like, and what it should be like.

Neil Postman makes this argument in Technopoly:

Information has become a form of garbage, not only incapable of answering the most fundamental human questions, but barely useful in providing coherent direction to the solution of even mundane problems. To say it still another way: The milieu in which Technopoly flourishes is one in which the tie between information and human purpose has been severed, i.e., information appears indiscriminately, directed at no one in particular, in enormous volume and at high speeds, and disconnected from theory, meaning, or purpose (69-70).

In other words, we live in an age where the overarching cultural assumption is that more information leads to progress — scientific progress, human progress, economic progress, etc. In fact, the opposite has occurred. The glut of information that overwhelms our senses on a day-to-day basis leads us to question whether we know anything at all. And because of that, it leads to a lack of a unified theory about what human beings are and what human beings are meant to be.

In this kind of situation, information becomes its own end, and not a means to some other end, which it ought to be. Postman again:

To the question, “What problem does the information solve?” the answer is usually “How to generate, store, and distribute more information, more conveniently, at greater speeds than ever before.” This is the elevation of information to a metaphysical status: information as both the means and end of human creativity. In Technopoly, we are driven to fill our lives with the quest to access information. For what purpose or with what limitations, it is not for us to ask (61, emphasis added).

When reading this yesterday, my first thought was that, in some ways, the way information acts of its own accord in society, growing for its own sake, is similar to the way capital (money) acts of its own accord within capitalism. Within capitalism, money always optimizes for the growth of money. Within what Postman calls “technopoly,” information optimizes for its own growth.

Without some overarching system in place that allows us to set information or money up as a means to some actual end, both of these become devourers of our time, attention, and ultimately our lives.

Twitter and the Shape of Our Knowledge

From Yascha Mounk’s piece at The Atlantic, “The Problem Isn’t Twitter. It’s That You Care About Twitter”:

Being active on Twitter has practically become part of the job description for some of the most influential people in the country. Any politician, journalist, or CEO who does not engage with social media gives up a precious chance to shape the conversation. And any public or semipublic figure who fails to monitor what is happening on the platform risks missing attacks or accusations that can quickly find their way into the headlines of national newspapers and the chyrons of cable-news shows.

Obligation breeds habit and habit addiction. The most active Twitter users I know check the platform as soon as they wake up to see what they missed. Throughout the day, they seize on the little interstices of time they have available to them—on the way to work, or in between meetings—to follow each new development in that day’s controversies. Even in the evening, when they are settling down to dinner, they cheer attacks against their enemies, or quietly fume over the mean tweet some anonymous user sent their way. Minutes before they finally drift off to sleep, they check their notifications one last time.

I’ve been off Twitter for a while now. My posts still go to a Twitter account, @cdbaca, but I do not have access to the username and password, because I know the dangers of Twitter for my own personal well-being. But I’m not here to toot my own horn about my digital habits. I have enough other bad habits that prove I am no internet saint.

This piece at The Atlantic made me think of Neil Postman’s claim that new technologies bear new epistemologies. In other words, the technologies we use make us all think differently about two things: (1) what we can know and (2) how we know those things. In Technopoly, he writes,

new technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop. (20)

Postman was no technophobe — he was just hesitant about the uncritical use of new technology that’s so prevalent in our society. We should be wary, in other words, of uncritical engagement with technology, because the use of technology often (always?) comes with its own way of framing how we picture the world. The same is true for language, which is maybe the postmodern insight.

Twitter is really interesting in this regard, and I think — I hope — that some of us are coming to our senses about the way that heavy Twitter use forms our sense of what we can know and how we know it. Limiting ourselves to short, pithy sentences that attempt to convey religious, political, philosophical, or existential meaning will absolutely have an effect on how we view those spheres of human life.

And ultimately, I wonder if that means that we ought to extend Postman’s thought about the effects of new technology. New technologies don’t just bear new epistemologies; after we accept that new epistemology (or framework of knowledge), we are led towards a new metaphysics (what reality really is), and ultimately a new way of understanding values (aesthetics and ethics).

What the Internet is Like

This lecture from Patricia Lockwood (“The Communal Mind”) is a little strange, a little terrifying, and distinctly captures what the internet felt like to me for those years I was on Twitter and Facebook. A quote, though there are several parts of this worth reading/listening to:

Each day we merged into a single eye that scanned a single piece of writing. The hot reading did not just pour from her but flowed all around her; her concreteness almost impeded it, as if she were a mote in the communal sight. Sometimes the pieces addressed the highest topics: war, poverty, epidemics. At other times they were about going to a deli with a poor friend who was intimidated by the fancy ham. And we always called it that: a piece, a piece, a piece.

Did you read the piece?
It’s there in the piece.
Did you even read the piece?
Um, I wrote the piece.

There’s something Faulkner-esque in the lecture itself, in that Lockwood attempts to capture the actual feeling of being on the internet in language. Not for the faint of heart, and admittedly a little strange. It only bolstered my desire to gain less knowledge, not more.

The Bottom of Things

Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don’t have time for such study.

From Professor Donald Knuth’s Stanford web page (H/T Cal Newport, “Is Email Making Professors Stupid?”)

This shouldn’t only apply to computer science professors, but to anyone interested in doing the deep work of research and understanding, and in making that research digestible for others.

Gain Knowledge. Not Too Much. Mostly from Books.

In a world where we consume and regurgitate information on an almost endless basis, it would be prudent for us to think of our information consumption in terms of diet.

What do we know about healthy eating right now? Basically, good consumption habits boil down to one simple rule for most people: “Eat food. Not too much. Mostly plants.” Of course, there are acceptable variations on this rule for human flourishing, but the simplicity and truth of Michael Pollan’s statement stand. Too much of any non-plant-based food is generally bad for us. And we know that sugar (especially refined sugar that is added to food) is particularly bad for us.

I’ve been thinking about what the informational or educational equivalent of Michael Pollan’s rule above would be. Perhaps: “Gain knowledge. Not too much. Mostly from books.” Right now, this is not the standard rule for most people. We don’t really need to be convinced at the moment that knowledge is a good thing, so I don’t think I need to defend “Gain knowledge” here. Humans are knowledge-amassing creatures by nature.

The second sentence presents a bit more of a problem. “Not too much.” Really? Is there such a thing as “too much” knowledge? I think the answer is likely “yes.” We live in the age of information. Much as most Westerners have access to a nearly limitless amount of food, we also have access to (what feels like) an infinite amount of information. How many of us spend our time standing in the “stream” (see Mike Caulfield’s distinction between the garden and stream metaphors for the internet), consuming text, images, and video at a rate that prevents us from comprehending that which we consume? It stands to reason that access to an infinite amount of information is a bad thing. Or, at minimum, that such access prevents us from forming useful, coherent understandings about the world as it is. Constantly standing in the stream of infinite information means constantly consuming disparate hot takes on whatever today’s events are, or whatever people are outraged about right now, or whatever entertaining meme or video happens to catch the eye. Further, infinite access means our attention is constantly disrupted, which in turn disrupts any chance we have of thinking deeply about one issue.

Finally, the third sentence: “Mostly from books.” Maybe this is an unfair one. The internet is extremely helpful in many ways; without it, many of us would not know many of the things we know now. And that includes understanding social and political issues in new ways. But let’s come back to our analogy — Michael Pollan is making an argument that most of the food we eat should come from plants and not meat, animal products, or (presumably) refined and processed ingredients (such as refined sugar).

I’d like to focus on the sugar bit, because that’s the most likely candidate for making a connection. Refined sugars are particularly bad for us, and they are also particularly addictive (I’m not going to link to anything; a ten-second Google search will prove me right). Sugar gives us a quick, easy burst of energy, but that energy often goes unused, and so our body stores it as fat. This leads to obesity, sluggishness, and a high likelihood of disease in a variety of forms. In the age of access to infinite information, the information we have access to is often no different from the sugary, highly processed, low-nutrient food that we all have constant access to. We consume that information, and it forms us so that we become intellectually sluggish and unable to think clearly or rationally about the world.

Books (and other long-form literature), however, give us a chance at a different kind of intellectual formation. They demand our attention. They help us train those intellectual muscles that otherwise become weak when our intellectual diets are pulled from social media feeds. Why? Because those feeds are bent towards outrage, and are actively grabbing at your attention, which ultimately leads to a “race to the bottom of the brain stem.” This doesn’t mean that all books contain and bequeath good-quality knowledge. But I’d be willing to bet that books are more likely to properly form our intellects in ways that a purely digital diet cannot.

So: Gain knowledge. Not too much. Mostly from books.

Sanctified by Subjectivity

From “Reading in the Age of Constant Distraction” by Mairead Small Staid (I encourage you to read the whole article. It isn’t long, and there are some beautiful thoughts here):

Loneliness is what the internet and social media claim to alleviate, though they often have the opposite effect. Communion can be hard to find, not because we aren’t occupying the same physical space but because we aren’t occupying the same mental plane: we don’t read the same news; we don’t even revel in the same memes. Our phones and computers deliver unto each of us a personalized—or rather, algorithm-realized—distillation of headlines, anecdotes, jokes, and photographs. Even the ads we scroll past are not the same as our neighbor’s: a pair of boots has followed me from site to site for weeks. We call this endless, immaterial material a feed, though there’s little sustenance to be found.

And then, I loved this line from Birkerts, quoted in the piece above:

The book—and my optimism, you may sense, is not unwavering—will be seen as a haven, as a way of going off-line and into a space sanctified by subjectivity.

Sanctified by subjectivity — perhaps, as opposed to marred by objectivity and even objectification. Maybe my growing discomfort with the online world is that it is a space built towards understanding what humans are in objective terms. That is, understanding humans algorithmically and biologically, rather than as subjective creatures. The online world is built around understanding human impulses as computer-like: push the right buttons, show the right images, and you can get a human to do whatever you want them to do. That’s probably true.

Unless we enter into a space that is “sanctified by subjectivity.”

A “Now” Page (or, On Post-Social Media Digital Life)

Given that I am not really using social media anymore, I haven’t had a place to update what I’m up to right now. After coming across the idea of a “now” page on several other blogs, I’ve made one for myself. You can see it here. Of course, as I’ve made this space my home on the web, the “now” page is less for you than it is for me. It’s a good way of reminding myself what I’m trying to focus on right now, in case anything that doesn’t really belong tries to creep into my daily work. It’s also a good way to push back against the “stream” version of the internet.

Ever since 2009 (or sometime around there), the stream — that never-ending, infinite-scrolling, time-sucking version of digital consumption — has dominated how we interact with the internet. It makes us passive consumers, rather than active participants. It keeps the internet from being what it was meant to be: a space for ideas, for gaining knowledge, for finding new things. The stream allows advertisers to control our attention in ways that seem benign, but which are really meant to subtly control our consumption habits.

So, the blog and my “now” page are my own little ways of pushing back against that. I don’t know that it could ever happen again, but it would be fun to move back to an internet where the hyperlink rules how we connect with one another. Where there are blogrolls instead of “friends.” Where my attention is mine and not taken from me.

Some additional reading/listening on the subject, in case you are interested:

“The Web We Have to Save”

“Cal Newport Has an Answer for Digital Burnout” (Podcast)

“Tending the Digital Commons”

On No Social Media

I gave up nearly all social media quite a while ago. I couldn’t even tell you when it was, exactly, because I started having my tweets deleted automatically after a week while I was still using the service.
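
(In case you’re curious what that kind of auto-deletion involves: here’s a rough sketch in Python — not necessarily how I set mine up — assuming the Tweepy library and Twitter API credentials with read/write access, which is a bigger “if” these days. The placeholder keys are hypothetical.)

    # A rough sketch, not necessarily how I did it: delete tweets older than a week.
    # Assumes the Tweepy library and Twitter API credentials with read/write access.
    from datetime import datetime, timedelta, timezone

    import tweepy

    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET",   # hypothetical placeholders
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
    )
    api = tweepy.API(auth)

    cutoff = datetime.now(timezone.utc) - timedelta(days=7)

    # Walk back through the timeline and remove anything older than the cutoff.
    for tweet in tweepy.Cursor(api.user_timeline, count=200).items():
        # Normalize to UTC in case the library returns naive timestamps.
        created = tweet.created_at.replace(tzinfo=timezone.utc)
        if created < cutoff:
            api.destroy_status(tweet.id)
            print(f"Deleted {tweet.id} from {created:%Y-%m-%d}")

Run on a daily schedule (a cron job, say), something like that keeps a timeline trimmed to the last week without any further thought.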

Still, sometime in the last year, I have:

  • Completely deleted Facebook. (Which is really hard, by the way.)
  • Given up access to Twitter. (My wife knows my password, so I can’t log in.)
  • Rarely logged in to Instagram. (Although that may be going soon too, after seeing the American Meme documentary on Netflix.)

It has been genuinely good for my soul. I don’t have hard evidence of this, but I do know that I feel different. After finishing my thesis, I was able to spend my time relaxing with my kids over the holidays, reading several books (purely in a leisurely way — I read both Loeb/Sale Batman anthologies, and I’m currently in the middle of Harry Potter and the Order of the Phoenix to give my brain a rest from philosophy), and playing games with Elaine. It’s been really great. My phone’s screen time is down, and I’ve even decided to delete the Outlook app I use for work. Reddit still sucks me in every once in a while, but it’s nothing like what Twitter used to do to me.

And as we head into a new election cycle (*gag*), I’m happy to say that my source of news won’t be the endless Twitter stream. I’ll probably be off of Reddit by then too, which will give me the chance to get my news only from diverse, reputable sources. I’m thankful that I’ve started this process now. I wouldn’t call myself a digital minimalist, per se, but I’m happy that my digital choices feel a lot more like mine, and not like they’re being made for me.

If you have the guts to take the leap, even from one of the social media sites you use — just take it. I don’t think you’ll regret it.