The Machines We Build, the Nightmares They Make of Us

While I am on hiatus from Twitter, I have not left the Internet, and it’s the perfect opportunity to use spare moments to turn some Twitter threads into actual blog posts. This is one of them. It is about fragmentation, social media, and (ultimately) how some of what is happening in our society today has troublingly apt parallels to concepts in Platonism.

The post begins with a discussion of a few recent podcast episodes from Your Undivided Attention, which focuses on algorithms, machine learning, AI, and how they’re ripping our social fabric apart due to a lack of oversight and social responsibility. The creators of Your Undivided Attention are experts, and they interview more experts. Tristan Harris and Aza Raskin are two of the three cofounders of the Center for Humane Technology, with backgrounds at the Silicon Valley companies that got us all into this mess in the first place. Their interviewees are people from academia, government, and industry who have either volunteered or been shoved into dealing with this modern parasocial nightmare.

Few people were clicking on the tweets where I shared those episodes. That disappointed me; I had decided to share them out of social responsibility because everyone should know about this. (BTW: the episodes have transcripts.)

The reason I’m pushing that podcast — and pushing it hard — is that many people are like, “Cool, this is a problem for the other side, I’m totally immune to it because my [system of belief] is magical.”

And you’re not.

I’m not.

Nobody is.

We are all human, and the limit of human rationality is the human brain itself.

Admittedly, the reason I care about this is partly professional, even though none of you knows me in a work context. Even before the 2016 elections, academic librarians were experimenting with how to teach things like search engine and information system bias to students. The elections were a bit of a shock, and the professional conversation started shifting. We have a lot of theoretical and practical professional literature that few outside of our field read (which is frustrating, because the tech and media industries really should), and we are doing what we can.

Library science is an information science, so what happens in consumer information and data is closely related to my field, even when the connections are not obvious. As an information professional, I know that algorithms, machine learning, and AI could easily fill out a SWOT analysis diagram. The biggest threats, in my opinion, are (a) that we are innovating too quickly to assess whether the impacts of some of this new tech are safe and (b) that access to abundant information is leading a large number of people to bypass the stable information systems and consultations with actual experts that are vital for reality checks.

And I’m not anti-tech. I only spend about 0.2% of my work life touching physical books, and half of that is when I am showing science students what a bound journal used to look like.

Many people in the sciences use search engines and recommendation algorithms. There are a lot of machine learning and AI startups that are marketing to libraries right now to bring some of that in-house. I’m suspicious of their effects — if implemented, will people write to reward the AI discovery layer so they can get cited more, or will they write what is important regardless? 🙃 (Arguably, that already happens in some fields because people write grants based on what they can get funded.)

From a consumer POV, as people using “free” tools (not really free: we pay for Twitter with our attention, since the platform runs ads), we can advocate for better regulation. The first step is to recognize how alarmingly normal our susceptibility is.

The episode that prompted the original thread was an interview with Guillaume Chaslot, a former YouTube engineer; see the embedded media player above (or click here if the episode doesn’t show). Chaslot kept returning to how the emphasis on maximizing “time on device” (similar to what happens in gambling, FYI) led his managers to ignore that the algorithms were funneling everyone towards more extreme content.

This “funneling towards extremism” phenomenon happens on every platform; once you know about it, you can cognitively resist to a degree, but as the interviewee points out, you are up against a computer that can outsmart top-class chess players. None of us can win.
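To make that funneling mechanism concrete, here is a minimal toy sketch in Python. It is not YouTube’s actual system; the catalog, the “extremity” scores, and the predicted watch times are all invented for illustration. The only idea it borrows from the podcast is the objective: greedily recommend whatever is predicted to keep someone watching longest.

```python
# A toy, hypothetical recommender. Nothing here comes from a real platform;
# the titles, extremity scores, and watch-time predictions are invented.
CATALOG = [
    {"title": "calm explainer",      "extremity": 0.1, "predicted_watch_min": 4},
    {"title": "heated commentary",   "extremity": 0.5, "predicted_watch_min": 9},
    {"title": "outrage compilation", "extremity": 0.9, "predicted_watch_min": 14},
]

def recommend(history):
    """Greedy 'time on device' objective: pick the catalog item with the
    highest predicted watch time, nudged upward when it is more extreme
    than the user's current average (a stand-in for engagement profiling)."""
    baseline = sum(item["extremity"] for item in history) / max(len(history), 1)

    def score(item):
        nudge = 1.0 + max(0.0, item["extremity"] - baseline)
        return item["predicted_watch_min"] * nudge

    return max(CATALOG, key=score)

history = [CATALOG[0]]      # the user starts on the calm explainer
for _ in range(3):
    nxt = recommend(history)
    history.append(nxt)
    print(nxt["title"])     # the picks drift to the most 'extreme' item and stay there
```

Even in this crude toy, the watch-time objective has no reason to ever surface the calm item again; the drift toward the extreme end of the catalog falls straight out of the metric.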

Y’all, we are in a cave. Here are some quotations I want you to see.

“That we’re sort of A/B Testing our way with the smartest supercomputers pointed at our minds to find sort of the soft underbellies to just be like, what’s effective? And so our A/B testing our way towards anti morality or immorality or amorality.” (YUA #4 Transcript, p. 3)

G: So we blame ourselves […]
T: You should have your self-control.
G: Yeah. […] You are bad person, but you have a supercomputer playing against your brain […]. And it will find your weaknesses. It’s already studied the weaknesses of billions of people, it will find them. (YUA #4 Transcript, p. 4)

The entire section of the conversation about how YouTube recommends increasingly extreme content is stomach-churning: the system profiles conspiracy theorists and people with extreme views as “ideal viewers” because they tend to use the platform more (again, maximizing time on device). I see this happening all of the time, and the more one knows about how these systems work, the more one can identify it everywhere. (Human brains are way too good at patterns, am I right? 😂)
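Here is another hypothetical sketch, this time of the “ideal viewer” problem; the viewing logs are invented, not from any real platform. If content is ranked by raw minutes watched and nothing caps how much one account can contribute, the heaviest users’ tastes end up defining what counts as popular for everyone.

```python
from collections import defaultdict

# Invented viewing logs: (user, video, minutes watched). One heavy user
# binges extreme content; several casual users each watch a little.
LOGS = [
    ("casual_1", "gardening tips",          5),
    ("casual_2", "gardening tips",          6),
    ("casual_3", "news recap",              4),
    ("heavy_1",  "conspiracy deep dive",   90),
    ("heavy_1",  "conspiracy deep dive 2", 120),
]

def rank_by_watch_time(logs):
    """Rank videos by total minutes watched, i.e. raw 'time on device'.
    Nothing limits how much any one account contributes, so the heavy
    user's bingeing dominates the ranking that everyone else sees."""
    totals = defaultdict(int)
    for _user, video, minutes in logs:
        totals[video] += minutes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_watch_time(LOGS))
# [('conspiracy deep dive 2', 120), ('conspiracy deep dive', 90),
#  ('gardening tips', 11), ('news recap', 4)]
```

A real recommender is vastly more complicated than a popularity table, but the incentive gradient is the same: whoever spends the most time on the platform exerts the most pull on the model.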

Ultimately, none of these episodes (and almost none of the other negative press about machine learning, AI, and algorithms) is saying that Twitter, YouTube, Facebook, &c. are bad. These platforms can do a lot of good things. The question is more about what we should be doing to impose regulation and ethical norms to ensure that the algorithms lead us towards the good, not towards division.

If I wanted to relate it to the things people follow this blog for, it’s like what I’m reading about in Iamblichus, Proclus, Hermias, &c. concerning division/generation into Matter and its converse: reverting back to the One via a process of purification, cultivating virtue, and the like. Just under two years ago, I wrote a post about social media, Zagreus, and the mirror; I would be harsher about social media now, I think, especially with the metaphorical space available to me now that I’ve read deeper into the Platonists. Algorithms want to divide us into market segments and keep us on-device. In a Platonic sense, too, the world we live in is a place of division; instead of being kept on-device, we’re being kept in-body, and the system is designed to bind us to that “device” even tighter. Furthermore, the things we create (like algorithms), at least those that lack images descending from the divine, are images of images, faint shadows of reality.

So many things in this world can either further division or its converse; we owe it to ourselves to establish ethical and moral guardrails in our tech to make sure it leads us to better places, not worse ones.

Ultimately, this: we all have vulnerabilities within ourselves.

They are the things that people told us and did to us when we were younger, the negative self-talk we give ourselves under stress, and even the cultural bruises we acquire when we feel out-of-place and unheard.

What divisiveness does, and what division cultivates, is a slow peeling open of these places to make them hurt more. Then, as those algorithms move one towards an extreme perspective, the harmful thing starts to feel like a panacea, even though it is not the balm that one needs.

This can happen directly, as when someone goes into a specific web culture and relies on recommendations for all le knows, but it also happens indirectly. A cultural zeitgeist experienced in person can indeed just be a bunch of people being pulled apart by this divisiveness all in the same way — they find one another and then draw the line between themselves and the rest of the world. Ultimately, they often end up hurting the people they’ve identified as an outgroup enemy or set of enemies.

This is quite complicated and disturbing. People love community, and they thrive in it, but the type of community-building that comes about in a positive environment has a wholly different character. It does NOT rely on opening those wounds; it heals directly and makes one better. Division-formed communities are a lie.

Since I was a child, I’ve seen the power of such lies — they can be very expedient. Instead of having to find something in common among people who are radically different, one just creates the aforementioned common enemy and goes after lim (or them).

It takes a lot of emotional and social labor for positive community to develop. These algorithms are incredibly surgical in how they work because without even thinking, they have “learned” that the way of division is expedient. Because I’m suspicious, and because I was bullied growing up, I am literally always asking myself if the people I’m around are people I feel affinity with for positive reasons or if it is an illusion caused by us having a mutual antagonist.

Working against these dynamics is an enormous struggle, especially when our modern Internet tools operate this way by default. The parallel processes of division and reunion in the Platonists that I mentioned above map onto this struggle quite neatly.

In a digital sense, working against this means going out of one’s way to see the myriad “rivers” of division and opinion (after Platonism, I cannot see rivers the way I once did) and how they connect back farther upstream. It means contextualizing against a larger picture that is noisy and that requires holding multiple conflicting things in one’s head at once.

Holding multiple conflicting things in one’s head at the same time is rather like the Forms in late Platonism (e.g., Proclus), where Forms that are divided against one another in matter are no longer so when traced up a level or several, where opposites can be held together. Proclus discusses this when talking about Likeness and Unlikeness, Sameness and Difference, and the like in his Parmenides commentary. It’s a lot closer to how truth actually works, yet it is harder to grasp.

But also, looking to Hermias’ notes on Syrianus’ lecture on the Phaedrus (specifically around 79,1), the specific way in which algorithms drive people must be related to the irrational soul, “the purifications of which are for their part effected through moral philosophy or the assistance of the gods.”

I remember that on my worst day of using Facebook, before I quit it, it felt like a howl in my head and a pit in my gut that would never relax. I was so on edge from social media back then that it impacted my sleep, and the anxiety just wouldn’t let me focus. Right now, I’m taking a break from Twitter until January 2020 to focus on work and creative projects for similar reasons: the stress of using the platform was compromising my performance, and going into the fall semester, something had to give so I could continue to meet my outside-of-work project goals. Why wouldn’t I choose to leave the most stressful thing? Most people I want to stay in contact with have my email address.

Social media also delivers a double whammy because the ways that people internalize the false beliefs put in their heads by this hyperdivision are impurities of the rational soul: the division creeps in through the emotions and latches onto the mind. Here’s Hermias a few lines later:

“And there is also pollution of the rational soul, [which occurs] when [the soul] internally reaches false conclusions on the basis of false beliefs and [then] comes out with a lot of nonsense and false thinking. Refutation is the purification for these [latter], and philosophy, and above all the assistance of the gods, which perfects the soul and leads it to the truth, can drive them off.”

The text goes on to discuss how clued-in people are about the pollution they carry, with examples of who was polluted and how le did (or didn’t) deal with it.

Contextually, the passage is analyzing what’s really going on when Socrates realizes he’s just shit-talked Eros and needs to account for it — this is a key dramatic moment in the Phaedrus. The commentary goes into a general discussion of purification in a variety of contexts. It’s a very interesting read, and I recommend buying it once the paperback is released and the price goes down, which will be soon.

In the context of how algorithms radicalize us online and coax people into digital, analog, and hybrid digital-analog communities that all funnel into states of extreme toxicity, a big part of why this is such a problem is that people made these algorithms to maximize Time on Device. I’ve mentioned Time on Device before, but I recommend checking out Your Undivided Attention’s two-part interview with someone who studies gambling about how social media apps are like slot machines.
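For a sense of the slot-machine parallel, here is a tiny, made-up simulation of an intermittent (variable-ratio) reward schedule; the numbers are arbitrary and stand in for likes, replies, and notifications rather than anything measured from a real app.

```python
import random

def check_app(num_checks, mean_ratio=4):
    """Simulate opening the app num_checks times, with roughly one 'payout'
    (a like, a reply, an interesting notification) per mean_ratio checks,
    delivered unpredictably, which is the schedule a slot machine uses."""
    return [random.random() < 1 / mean_ratio for _ in range(num_checks)]

hits = check_app(20)
print("".join("!" if hit else "." for hit in hits))
# Something like "..!....!..!.......!." on a given run: you never know which
# check pays off, so the cheapest way not to miss a payout is to keep
# checking, and that is exactly what maximizes time on device.
```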

In brief, though: social media companies want money, especially the publicly traded ones.

Which means that there is a core of greed at the heart of all of this, like a vast mud pit (Hermias mentions mud, so I’m gonna roll with it) that we are wading through with our souls and minds without even realizing that we’re getting dirty. We track it into our homes and hug people with it clinging to us. The mud gets everywhere. The solution to the mud problem in our communities is to not have a mud pit, which is why regulation is important for these hyperdivisive social tech tools and whatever gets brought out of techno-Tartarus next. But we have to mind ourselves while we advocate.

The bottom line of this, though, is to watch your mind and your emotions and be vigilant about knowing yourself and your values. It’s not good enough, but it is all you can do.

In January 2018, I wrote another post about social media. I’d like to close by quoting myself.

In the Enchiridion, Epictetus says that we control opinion, pursuit, desire, aversion, and our own actions. How true is this in an era when social media exploits the vulnerable cracks in our human psyches? What of ourselves do we actually control? How much of what we think of any issue is innate, and how much is shaped and controlled by advertisers or algorithm engineers? They shape our opinions. They shape our pursuits, desires, and aversions. Our opinions and perspectives lead to our actions in the world.

We control our attention, at least in the beginning.

The places we decide to interact impact which opinion-shaping algorithms we expose ourselves to.

Once we are in an online space, the notifications, the likes, and so on are designed to make us value the platform and keep us there. So, if we control our attention in the beginning, doesn’t it make sense to make value decisions on where we should spend our time before the dinging starts?

♨️📿♨️
