Should AI Decide What We See Online, Based On Our Values?
My response to Elle Griffin's “Could AI make us wise?” in The Elysian Substack, and a regurgitation. Skip to the regurgitation if you're short on time.
In a recent post at The Elysian Substack entitled “Could AI make us wise?,” Elle Griffin (@ellegriffin) suggests that training an AI to curate the best content for us, based on our values, could lead to a better experience of the Internet. Ultimately, this would make us smarter and better humans, or wiser:
“But what if we created an arena of virtue instead? Where the only options on the internet were healthy ones? What if the internet served only the most quality news sources, the most trusted and intelligent individuals, the most enlightened posts and the most thoughtful Twitter discussions? What if we were connected with the most beautiful craftsmanship from Etsy, and refurbished products from curated thrift shops? What if we could connect to friends and loved ones who read the same books and have the same values?”
No doubt, we are all tired of being taken down conspiracy theory rabbit holes and true crime drama sagas, and if most of what is on the Internet disappeared tomorrow, surely we would be better off. I agree with that much! But I don’t think AI (read: Large Language Models, or LLMs) is a good solution or partner in the task of curating our experiences, online or otherwise, and below I copy the long comment I wrote explaining one line of reasoning:
“Let’s start from the assumption that we would not want our values to be imposed upon us from the outside. This is a risk we would run should we build AI out to curate our experiences on the Internet, but let’s set that aside for now.
Now, imagine some value that you hold to be a value for yourself. Or even better, imagine someone that you know, and the things they seem to value. Now imagine that they act in a way that seems to go against their self-declared values. You would immediately go find a journalist so that they could write a piece about this completely anomalous situation, where a person went against their values! 😉
I’m joking, of course, because we go against our values all the time; we act in ways that are contradictory. We're complicated. We can be confused about what we value, and what we think we value can be wrong, or change over time. If we have AI LLMs reinforcing what we have told them, or shown them that we value through our actions, it would be that much harder to change course. With a technology like LLMs and algos that lack transparency, it will be nearly impossible to change, adjust, or otherwise control the process.” We as humans are adaptable, and we need to be able to change. Computers, algos, and LLMs, not so much. Or, at the very least, we are putting our adaptability in someone else’s hands, i.e., those who create and maintain the LLMs.
The problem is not really that we don’t have the ability to curate content for ourselves while online — we could choose to only consume quality content — but probably that we lack the will to do so, and these companies actively work against our choices. I am myself guilty of indulging in rabbit-holing, burning hours of my life and way too many brain cells, and I do this even though I know it’s not great for me. I know how manipulative these social media platforms are, continuing to serve us up garbage while triggering our inner addict to keep us engaged. Who was it that said not only do we know we are being manipulated, but we even ask for moar! moar!… I think it was the postmodern condition guy, Jean-François Lyotard. In any case, it’s probably a human problem, not a data-processing problem.
It’s also a capitalism issue, because even the best of tools in the context of hyper-capitalism is going to become a tool for the established powers to extract more value. In the early days, the Internet itself was hailed as democratically revolutionary, because it was going to democratize knowledge, and it was going to help shift power to the people.
One of the early advocates of the Internet's democratizing power, John Perry Barlow, articulated a vision of this potential in his widely recognized manifesto, “A Declaration of the Independence of Cyberspace,” penned in 1996. In it, Barlow famously declared:
“We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”
There was so much optimism back then… and look at what we have actually built for our internet: a very big mall with a lot of meaningless choices with which to exhaust our executive powers. The mall doubles as a bully’s dream playground, or a boxing ring, depending on who shows up. It’s also a happiness machine that substitutes para-social relationships for friendships, and helps us forget our very real alienation: from others, from nature, from our work, and from our very selves. And sadly, in retrospect, I think it was inevitable. The same fate is likely to befall LLMs. Already most people, myself included, just want to figure out how to make money with them.
But I wanna know what you think, so slap some keys into the comment section below.
A Regurgitation - After Thinking About This for a Few Days, I Got a Crazy Idea
Now, for the crazy idea. You know how you get on Youtube or TikTok or any such social media platform (with a very few notable exceptions), and it “suggests” content you might like? They own the algorithm, and the data you create through your online behavior is used to give you these “suggestions,” with the ultimate aim of shaping your behavior online and off, and making you more predictable. What if you owned your data, and that data could only be shared by you? You would also own an algorithm that you train (let’s set aside the question of how to train your personal algo), and you would lease access to your attention and data to these platforms. So you arrive at, say, whatever decentralized or federated version of Youtube the future brings us, and you tell it what you want to see and hear. If we play our cards right, we could control not only our data, but also own the algos through which we interact with the collective unconscious-but-seemingly-more-conscious-every-day of the internet.
I think this may be a variation on Griffin’s vision in her piece? If so, let’s figure it out. It would basically be a layer between you (or your device) and the internet, like a VPN service, that shapes back at the shaping of content these current platforms want to serve up to us. We could say: no thanks, YT, I don’t want any more right-wing conspiracy theories; yes, I know I watched Loose Change many years ago, but it’s not what I want. No, I am not a guy just because I like philosophy; stop sending me guy things. You do your little clickity-clackity work and find me something I really want to see, or I won’t accept your landing page. So there.
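To make the idea a bit more concrete: the “layer between you and the internet” could work like a user-owned filter that a platform’s feed has to pass through before anything reaches your screen. Here is a minimal sketch in Python; everything in it (the `PreferenceFilter` class, the rule names, the feed format) is a hypothetical illustration of the architecture, not any real platform’s API:

```python
# A minimal sketch of a user-owned preference filter: the rules live
# locally with you, and the platform's suggested feed is filtered
# before anything reaches your screen. All names here are illustrative
# assumptions, not a real service or API.

from dataclasses import dataclass, field


@dataclass
class PreferenceFilter:
    """Locally stored, user-controlled rules.

    The platform never sees the rules themselves, only which items
    you end up accepting.
    """
    blocked_topics: set = field(default_factory=set)
    preferred_topics: set = field(default_factory=set)

    def accept(self, item: dict) -> bool:
        topics = set(item.get("topics", []))
        # Hard block always wins (e.g. "no more conspiracy theories").
        if topics & self.blocked_topics:
            return False
        # If I declared preferences, require a match; otherwise let it through.
        return bool(topics & self.preferred_topics) or not self.preferred_topics

    def filter_feed(self, feed: list) -> list:
        return [item for item in feed if self.accept(item)]


# The user, not the platform, decides what gets through.
me = PreferenceFilter(
    blocked_topics={"conspiracy"},
    preferred_topics={"philosophy", "woodworking"},
)

feed = [
    {"title": "Loose Change re-upload", "topics": ["conspiracy"]},
    {"title": "Sartre on freedom", "topics": ["philosophy"]},
]

print([item["title"] for item in me.filter_feed(feed)])
# → ['Sartre on freedom']
```

The real versions of this idea would of course be far messier (who labels topics? how is the filter trained?), but the architectural point stands: the rules and the data live on your side of the connection, and the platform only ever gets the output.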
Speaking of which, check out Kwaai. I have no relationship to them; I just discovered them a few days ago through a family member who sent this to me:
https://www.kwaai.ai/
Originally written with speech-to-text, edited for clarity.
I like your suggestion! And it definitely seems like a variation on the theme.
As to your point: “The problem is not really that we don’t have the ability to curate content for ourselves while online — we could choose to only consume quality content — but probably that we lack the will to do so”
I don’t actually think the problem is that we aren’t curating things for ourselves because we lack the will to do so. As you point out: we did this on the early internet and it was largely great! The problem is that the early internet was free. And when we needed to figure out a way to monetize the internet we landed on monetizing our attention (which is why we introduced algorithms). But I think we could monetize the internet without monetizing our attention. We just need to change what wins the algorithm!
Yes, Facebook assumes dog food and bedside tables lie at the core of my being. AI can define the nature of our existence, the essence of our individuality. It can reduce us to a 'useless passion', to quote Sartre's famous words. It really does become a question of self-mastery, an assertion of the will. And the trouble is that many live in a state of stupefaction.