Free Notes
If I had a journal…
Slop and Institutions…
I'm not sure people really understand the potential scale of AI slop and the role institutions will need to play. The only thing stopping a mass of people from trying to game platforms with AI slop is an institution willing to say: “Hey, stop this! We are committed to a mission, and it doesn't include AI.” The premise of artificial intelligence simply doesn't reach for the same aims that institutions do. Without this, Substack, for example, will be overrun by people trying to essentially “do Substack” or “win on Substack.” What guards would Substack set in place to avoid this? How will it balance its need for growth with the “needs” of the users and the product?
Substack is a platform. It has no larger commitment to any kind of mission other than transference, monetization, and growth. It’s simply using the institutional ghost of media to achieve that growth. It doesn’t actually care about the underlying mission of journalism; it’s just renting it on a temporary basis. We might not realize this is happening until it’s too late.
“Institutions” don’t use anything in this sense. They are, to the contrary, the ones being used. Platforms sell you an idea that you can reach people, and that this reach is impactful. It’s generally not, though. Institutions “do” and “commit,” things that AI slop can’t touch in any meaningful way.
AI slop is simply the mirror image of platforms. Platforms are why AI even produces slop. AI, as we know, isn’t autonomous. It’s specifically trained in ways we can know and trace (by labs), but also, at a larger, base level, by whatever training data is available at any given time.
Looking at AI Slop at MoMA…
Artificial intelligence is built by a group of people who have been looking for a “right” answer to things. They use data to optimize because the end point of thought, for them, is a final answer. It is a science, an attempt to find a solution to a problem. It rarely intermingles with art, in the most basic sense, because the humanities deal in open-ended problems, and … there's no “problem” per se. Duchamp is known for having said, “There is no solution because there is no problem.” There's no reward for getting it right, because no one's really sure what the definition of right is. And really, all we’re doing is having a discussion about this. There’s no end point, so why would we enlist machines for a map? Because it looks cool?
I don’t think people approach the question of AI and art structurally enough. They arrive by way of fundamentally different epistemic concerns. The only way they slip in together is by making an (incorrect) appeal to “AI” as a tool, and privileging the role of the artist as an unchanged user of this tool. This seems curiously at odds with the way nearly all industrial commentators describe the potentials of artificial intelligence, be it the AGI religiosity, or the doomers who ascribe some kind of intent or autonomy to a takeoff scenario, where humans become just part of the fauna in a fully digitized world. Which is it? Is its power so potent that it thinks for itself, or is it a passive tool, like a camera? Is the camera a passive tool? I don’t even think that’s true.