Common Sense
The worst-kept secret in the world

There are really just two types of people in the world. One seeks to make everything as efficient as possible, and measures life accordingly. The other recognizes that a degree of inefficiency can itself be generative: detours, friction, and ambiguity often play a vital role in production.
The first mindset optimizes toward a defined objective, sometimes relentlessly and at the cost of context. It very often misses the forest for the trees.
The second is no less ambitious; it simply acknowledges that meaningful achievement draws on a wider field of considerations. It recognizes that the human is at the center of our world, and that this creates beautiful constraints.
Software naturally appeals to the first disposition. Writing code or formulas requires precision. Software cannot be ambiguous: uninterpretable statements must be resolved into binaries, logic, syntax, and executable plans. For many domains this works, mathematics being the clearest example. Software can be understood as an attempt to apply mathematics to media: structured logic layered on top of a communicative medium. Hello, world. Do this.
Since the rise of the ubiquitous computer, Silicon Valley has asked whether this logic could be extended beyond simple digital media: whether social institutions themselves could be rendered programmable and optimized for profit. In many cases, it succeeded. Platforms reorganized communication, commerce, and culture around metrics and efficiency.
Now artificial intelligence represents a new frontier in the same project. Advocates assume that the success of digitizing social processes will be repeated on the act of cognition itself. They believe, and hope, that the texture of everyday thinking, creativity, and judgment can be automated and optimized in the same way. Yet cognition resists this reduction. Human life is not merely information processing; it is embodied, relational, and constrained. Many people intuitively feel that these constraints, the irreducible and ineffable particularities of human experience, are productive in themselves. This intuition is often called humanism. It has appeared in different forms across history and continues to reassert itself whenever instrumental logic threatens to eclipse the human scale.
Allow me a brief detour into the historical context of our present AI panic.
There is (was?) a kind of worker called a software engineer. Software engineers take instructions and implement them in computers. They think that the thing they invented, perhaps their singular achievement (which takes instructions and puts them into computers), will replace all work. This is because, from their viewpoint, the world is just taking instructions and putting them into computers. When they go to sell the software, they say, “Your business is taking instructions and putting them into computers.” When they say they don’t need human employees, it is because they think all humans do is take instructions and put them into computers. But most people aren’t primarily giving instructions to computers. Most people do many more things.
We’ve started to observe a paradox of AI adoption. The workers who fear that their value can be most easily automated are also its most ardent adopters. Just take a quick look at the vast army of middle managers posting to LinkedIn about their weekends with Claude Code. On the other side, you have a class of people who have been accused of “ignoring” AI. “They are not doing anything to prepare! They smugly dismiss it. They will be left behind,” goes the charge. These people sit at the other end of the paradox. They are content that their contributions cannot be competed out of view by math, by models (no matter how powerful), or by agentic intent. This is because computers cannot sense, cannot be aroused, cannot love, cannot want or long, cannot change their minds, cannot ever know or fail or live or die. A vast swath of human institutions rests on this silent substrate. It is silent no more.
Our biggest challenge remains that, for obvious reasons, capital tends to favor efficiency. Systems that reduce cost, accelerate output, and scale easily attract investment, and social and political power tends to follow capital. But in the face of this, the humanists will resist, because they hold the worst-kept secret in the world: when you optimize, you get a quick win but a longer, much costlier loss. Using AI to write your email, to avoid thumbing through the archives, or to answer a question you and your community could happily answer commits a solemn break with what it means to be meaningful in the world. The question, then, is not whether short-term efficiency will advance, but how to shape institutions, norms, and incentives so that efficiency serves human ends.



AI’s biggest risk, beyond outright cyber warfare, is its ability to collapse the sense of meaningful work at what feels like warp speed relative to humanity’s overall timeline. And we have not yet developed a way of existing in society in which work is not, by and large, necessary for humans to do.