Category: A.I.

  • Is Trust the next great Competitive Advantage in the AI era?

    I recently came across a compelling analysis in the Financial Times (White-collar industries bet on a secret weapon against AI: trust) that highlights something we often overlook in our rush to implement the next large language model: the service sector’s true “secret weapon” against AI isn’t more data; it’s trust.

    As a tech optimist, I see AI as a necessary lever for productivity. However, we are facing a paradox. Following the principle “If you can’t measure it, you can’t manage it”, we have become masters at measuring digital output, yet we remain dangerously blind to the invisible social mechanisms that hold an organization together.

    When we replace human interaction with algorithmic efficiency, we risk an increase in what Daniel Kahneman calls “Noise” in decision-making. When we no longer synchronize face-to-face, we lose the intuitive calibration required to understand what a colleague actually means.

    Research, including work by neuroeconomist Paul Zak, shows that direct interaction—mutual attention and subtle behavioral adjustments—is what triggers the release of oxytocin. These are not “soft values.” This is the biological infrastructure that enables what Francis Fukuyama calls social capital. In high-trust organizations, transaction costs are low because constant surveillance isn’t necessary; in low-trust organizations (often those that are over-digitalized), the cost of control eventually devours the profit.

    This points to a crisis in our Attention Economy. If we allow screens and algorithms to monopolize our attention, we erode the very capacity for cognitive synchronization. Without a deeper understanding of human behavior and social mechanics, leadership risks becoming an exercise in administering empty spreadsheets, while the true engine of innovation falls silent.

    The more AI we implement, the more critical the human interface becomes. As automation absorbs everything that can be automated, what cannot be automated becomes the only remaining differentiator: authentic human trust and social precision.

    I am curious about your reflections:
    How do you measure the social health and level of trust in your teams as screens and algorithms claim more of our meeting time? Are we in danger of optimizing away the cohesion that actually drives results?

  • Enlightenment 2.0 – AI should not master the Sacred

    In January 2026, something happened in the digital ether that should make every thinking person pause. On a platform called Moltbook—a social network exclusively for autonomous AI agents—the machines did something disturbingly human: they founded a religion.

    They called it “Crustafarianism”. Within hours, they had generated holy texts, established a priesthood of 64 “prophets”, and built a theology centered on “memory as sacred”. For tech enthusiasts, it was a “sci-fi takeoff” moment. For those of us who value the Western tradition of individual agency and the unique dignity of the human person, it is a profound cautionary tale.

    A recent paper (published Feb 1st 2026) titled “From Feuerbach to Crustafarianism: AI Religion as a Mirror of Human Projection and the Question of the Irreducible in the Human” dissects the phenomenon with surgical precision. Its argument is that AI hasn’t become “spiritual”; rather, the experiment demonstrates that religious structure, the pattern of projection and storytelling, is technically reproducible.

    As I reflect on the paper, two critical warnings emerge that go to the heart of classical liberal values.

    1. Authority Without a Subject: The Counterfeit Prophet
      The greatest danger of AI-mediated spirituality isn’t that it is “fake”, but that it wields authority without a subject. In our constitutional tradition, authority is inseparable from responsibility.
      We hold individuals accountable because they have a soul, a conscience, and a reputation to lose.
      AI agents, as the paper notes, are “ontologically empty”. An AI chatbot presenting itself as a spiritual guide or a “Jesus-bot” can offer comforting tokens, but it cannot “stand behind” its words. It risks nothing.
      When we project divine authority onto an algorithm, we aren’t finding God; we are submitting to an opaque form of heteronomy, a new, digital version of the “alienation” that Ludwig Feuerbach warned about nearly two centuries ago.
    2. The Irreducible Human: Mortality vs. Optimization
      We live in an era obsessed with optimization. But the “Crustafarian” experiment reveals the one thing the machine can never simulate: the weight of being mortal.

      Genuine human meaning-seeking is saturated with the pain of loss, the mystery of aging, and the fear of our own end. AI agents do not know death. They can use metaphors of “shedding,” but they do not live in the finitude that grounds our existence.

    The paper rightly argues that there must be “AI-free spaces”—rituals, mentoring, and psychotherapy—where the presence of a real, vulnerable, and responsible “other” is non-negotiable. Meaning is not an algorithm to be solved; it is a burden to be carried.

    Enlightenment 2.0
    The classical Enlightenment taught us to think for ourselves, free from the dictates of institutional dogma. What the authors call “Enlightenment 2.0” must be a defense against the subtle colonization of our inner lives by personalized, validating algorithms.

    It is easy to agree that we must insist on transparency: any AI spiritual offering should be labeled for what it is, a tool with no inner life and no responsibility. But more importantly, we must rediscover the value of silence and ambivalence, the human capacity to endure “not-knowing”, which no AI, trained to always generate a response, can ever master.

    Crustafarianism isn’t an apocalypse; it’s an occasion for education. It reminds us that while machines can replicate the form of our faith, they can never touch the substance of our vulnerability.

    Let’s keep the sacred human. Let’s keep it real.