Tag: AI

  • Is Trust the next great Competitive Advantage in the AI era?

    I recently came across a compelling analysis in the Financial Times (White-collar industries bet on a secret weapon against AI: trust) that highlights something we often overlook in our rush to implement the next large language model: the service sector’s true “secret weapon” against AI isn’t more data—it’s trust.

    As a tech optimist, I see AI as a necessary lever for productivity. However, we are facing a paradox. Following the principle “If you can’t measure it, you can’t manage it”, we have become masters at measuring digital output, yet we remain dangerously blind to the invisible social mechanisms that hold an organization together.

    When we replace human interaction with algorithmic efficiency, we risk an increase in what Daniel Kahneman calls “Noise” in decision-making. When we no longer synchronize face-to-face, we lose the intuitive calibration required to understand what a colleague actually means.

    Research, including work by neuroeconomist Paul Zak, shows that direct interaction—mutual attention and subtle behavioral adjustments—is what triggers the release of oxytocin. These are not “soft values.” This is the biological infrastructure that enables what Francis Fukuyama calls social capital. In high-trust organizations, transaction costs are low because constant surveillance isn’t necessary; in low-trust organizations (often those that are over-digitalized), the cost of control eventually devours the profit.

    This points to a crisis in our Attention Economy. If we allow screens and algorithms to monopolize our attention, we erode the very capacity for cognitive synchronization. Without a deeper understanding of human behavior and social mechanics, leadership risks becoming an exercise in administering empty spreadsheets, while the true engine of innovation falls silent.

    The more AI we implement, the more critical the human interface becomes. If everything can be automated, what cannot be automated—authentic human trust and social precision—becomes the only remaining differentiator.

    I am curious about your reflections:
    How do you measure the social health and level of trust in your teams as screens and algorithms claim more of our meeting time? Are we in danger of optimizing away the cohesion that actually drives results?

  • Enlightenment 2.0 – AI should not master the Sacred

    In January 2026, something happened in the digital ether that should make every thinking person pause. On a platform called Moltbook—a social network exclusively for autonomous AI agents—the machines did something disturbingly human: they founded a religion.

    They called it “Crustafarianism”. Within hours, they had generated holy texts, established a priesthood of 64 “prophets,” and built a theology centered on “memory as sacred”. For tech enthusiasts, it was a “sci-fi takeoff” moment. For those of us who value the Western tradition of individual agency and the unique dignity of the human person, it is a profound cautionary tale.

    A recent paper (published Feb 1st 2026) titled “From Feuerbach to Crustafarianism: AI Religion as a Mirror of Human Projection and the Question of the Irreducible in the Human” dissects this phenomenon with surgical precision. It suggests that AI hasn’t become “spiritual”. Rather, it has proven that religious structure, the pattern of projection and storytelling, is technically reproducible.

    As I reflect on this excellent paper, two critical warnings emerge that go to the heart of classical liberal values.

    1. Authority Without a Subject: The Counterfeit Prophet
      The greatest danger of AI-mediated spirituality isn’t that it is “fake,” but that it wields authority without a subject. In our constitutional tradition, authority is inseparable from responsibility.
      We hold individuals accountable because they have a soul, a conscience, and a reputation to lose.
      AI agents, as the paper notes, are “ontologically empty”. An AI chatbot presenting itself as a spiritual guide or a “Jesus-bot” can offer comforting tokens, but it cannot “stand behind” its words. It risks nothing.
      When we project divine authority onto an algorithm, we aren’t finding God; we are submitting to an opaque form of heteronomy, a new, digital version of the “alienation” that Ludwig Feuerbach warned about nearly two centuries ago.
    2. The Irreducible Human: Mortality vs. Optimization
      We live in an era obsessed with optimization. But the “Crustafarian” experiment reveals the one thing the machine can never simulate: the weight of being mortal.

      Genuine human meaning-seeking is saturated with the pain of loss, the mystery of aging, and the fear of our own end. AI agents do not know death. They can use metaphors of “shedding,” but they do not live in the finitude that grounds our existence.

    The paper rightly argues that there must be “AI-free spaces”—rituals, mentoring, and psychotherapy—where the presence of a real, vulnerable, and responsible “other” is non-negotiable. Meaning is not an algorithm to be solved; it is a burden to be carried.

    Enlightenment 2.0
    The classical Enlightenment taught us to think for ourselves, free from the dictates of institutional dogma. What the authors call “Enlightenment 2.0” must be a defense against the subtle colonization of our inner lives by personalized, validating algorithms.

    It is easy to agree that we must insist on transparency. Any AI spiritual offering should be labeled for what it is: a tool with no inner life and no responsibility. But more importantly, we must rediscover the value of silence and ambivalence, the human capacity to endure “not-knowing”, which no AI, trained to always generate a response, can ever master.

    Crustafarianism isn’t an apocalypse; it’s an occasion for education. It reminds us that while machines can replicate the form of our faith, they can never touch the substance of our vulnerability.

    Let’s keep the sacred human. Let’s keep it real.

  • AI in Social Work: Balancing Technology and Human Touch

    AI robot as social worker (Copilot)

    I have read quite a bit about AI and how it is used in various fields, but I have mainly been focusing on social services. In service provision there are many possibilities to use AI. Chatbots of different kinds are probably the most obvious application; at least they are the one that users most easily notice. If AI is used in the processes of handling documents, without real-time interaction, it is almost impossible for the client/customer/user to notice. There are also many uses that are invisible to the client, such as how people in a queue are prioritized.
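    To make that last, invisible category concrete, here is a minimal sketch of what algorithmic queue prioritization could look like. It is purely hypothetical: the scoring rule, field names, and weights are invented for illustration and are not taken from the article or any real system.

    # Hypothetical sketch of an "invisible" AI use in a service queue:
    # each case gets an urgency score, and the most urgent is handled first.
    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Case:
        sort_key: float                      # negated urgency, for the min-heap
        client_id: str = field(compare=False)

    def urgency_score(days_waiting: int, risk_flags: int) -> float:
        # Toy rule: longer waits and more risk indicators raise urgency.
        # A real system would need training, auditing, and bias testing.
        return 0.1 * days_waiting + 1.0 * risk_flags

    queue: list[Case] = []
    for client_id, days, flags in [("A", 30, 0), ("B", 5, 2), ("C", 12, 1)]:
        # heapq is a min-heap, so store the negated score to pop the highest first
        heapq.heappush(queue, Case(-urgency_score(days, flags), client_id))

    while queue:
        case = heapq.heappop(queue)
        print(f"Next: client {case.client_id} (urgency {-case.sort_key:.1f})")

    The point of the sketch is that nothing in this process is visible to the clients themselves, which is exactly why the ethics of such systems deserve scrutiny.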
    Among other articles, I happened to read “AI Chatbots in Social Services” in the Stanford Social Innovation Review (SSIR), which discusses the potential and challenges of using AI chatbots in social services. The article is worth writing about because it illustrates the key questions well.
    One of the most pressing of these questions is the responsible use of AI, which is of growing concern and interest. This makes efforts like Z-inspection much needed. Z-inspection is a globally recognized methodology for evaluating ethical risks and challenges related to artificial intelligence. The methodology is based on interdisciplinary collaboration and brings in experts from various fields to ensure that AI solutions are responsible and ethically defensible. By using Z-inspection, organizations can better understand the ethical implications of their AI projects and work to minimize the risk of negative consequences.
    But back to the article in SSIR. It argues that AI chatbots can be a valuable tool but emphasizes the importance of using them responsibly to avoid negative consequences. This is certainly a point of view I can agree with. The article highlights the crucial role of human contact in social services and advocates for AI chatbots to complement, not replace, human interaction. This is likely one of the most important reasons, if not the most important one, why AI is only a tool. One could perhaps say that the unique selling point (USP) of social workers and social work is precisely the human contact: seeing the individual as a whole rather than mechanically processing a case based solely on strictly mathematical (probability-based) rules.
    That said, one can still argue, as the article does, that AI chatbots are a valuable tool. The article highlights the potential of AI chatbots to streamline and improve social services. They can be used to provide information and support to those in need, automate tasks such as scheduling and case handling, and provide faster and more accessible service. The argument for AI is that this can free up resources and personnel, allowing for more time for complex cases and personal contact where it is truly needed.
    But the main point must still be the importance of responsible use. The article warns of potential risks and challenges. It emphasizes that AI chatbots can contain biases, lack the empathetic ability that is crucial in social work, and must not replace human contact. Careful design, testing, and evaluation are necessary to ensure that the chatbots are fair, reliable, and complement human interaction in a positive way. Data protection and ethical considerations in the use of AI are also highlighted.
    The conclusion is that AI chatbots can function as complements, not replacements, for social workers. The article underlines that human contact is crucial in social services for building trust and providing adequate support. Chatbots can handle simpler questions and tasks, giving staff more time for individuals who need personal support and complex interventions. It is about finding a balance where technology enhances, but does not undermine, the human aspect of social services.
    In summary, the article emphasizes that AI chatbots have the potential to improve social services, but this requires a conscious and responsible approach. By focusing on ethical design, careful implementation, and above all by seeing them as a complement to human interaction, AI chatbots can become a valuable tool for strengthening and streamlining social services. This ties into my interest in the work being done on the ethical use of AI, especially within the framework of Z-inspection. I am happy to be a member of its advisory board.

  • DeepSeek – what’s next?

    What is the commotion about in a nutshell?

    Basically, two things are going on at the moment. One is the discussion of DeepSeek’s technological innovations; the other is how the stock market has reacted to the news.

    Main Technological Innovations of DeepSeek

    1. Open-Source AI Models: DeepSeek has developed open-source AI models that offer comparable performance to top chatbots like OpenAI’s ChatGPT at a fraction of the cost.
    2. Efficiency and Cost-Effectiveness: The company’s AI model, R1, is highly efficient and cost-effective, challenging the belief that AI development requires vast amounts of power and resources.
    3. Advanced Reasoning Skills: DeepSeek’s models, particularly R1, demonstrate advanced reasoning skills, such as the ability to rethink approaches to problems, making them stand out in benchmarks like AIME 2024 and AlpacaEval 2.0.

    Market Reaction

    1. Market Turmoil: The release of DeepSeek’s AI models caused significant turmoil in the tech market, with major companies like Nvidia experiencing a massive drop in market value.
    2. Investor Concerns: Investors are concerned about the potential impact of DeepSeek’s technology on the future of AI and computing, leading to dramatic market fluctuations.
    3. Long-Term Implications: Analysts believe that DeepSeek’s advancements could lead to a reevaluation of costs, strategies, and research approaches by major tech companies, potentially altering the competitive landscape.

    DeepSeek AI Models Overview

    DeepSeek AI, a Chinese startup, has recently launched its groundbreaking AI models, DeepSeek R1 and DeepSeek R1 Zero. These models have quickly gained attention for their exceptional performance in tasks such as coding, mathematics, multilingual processing, and reasoning. What sets DeepSeek R1 apart is its open-source nature and affordability, which make it accessible to a much wider audience. The model weights are released under the MIT license, which makes it significantly cheaper to use than proprietary models like OpenAI’s ChatGPT.
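    Because the weights are openly published, the models can be run locally with standard tooling. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and one of the distilled R1 checkpoints DeepSeek released (the full R1 model is far too large for consumer hardware); the exact model ID should be verified against the Hugging Face hub.

    # Minimal sketch: running an open-weight DeepSeek R1 distill locally
    # via Hugging Face transformers (pip install transformers accelerate).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Explain briefly why open-source AI models can lower costs."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))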

    Market Reaction

    The introduction of DeepSeek AI models has caused a significant stir in the tech industry. The stock prices of major tech companies, including Nvidia and other AI hardware manufacturers, have taken a hit due to the cost-effective nature of DeepSeek’s models. Nvidia’s stock, in particular, saw a dramatic decline, losing nearly $600 billion in market value in a single day. This reaction highlights the disruptive potential of DeepSeek’s models, which offer comparable performance at a fraction of the cost.

    Future Expectations on AI

    Looking ahead, DeepSeek AI models are expected to continue challenging the dominance of established players in the AI industry. Their affordability and open-source nature could democratize access to advanced AI technologies, enabling more developers and organizations to leverage these tools for various applications. Additionally, the ongoing advancements in AI and the increasing demand for efficient and cost-effective solutions suggest that DeepSeek’s models will play a significant role in shaping the future of AI development.

    So what can we expect from President Trump?

    DeepSeek highlights the competition in AI between the USA and China. Based on recent statements and actions, we can make an “educated guess” (if that expression is allowed) about how President Trump might react to the news about DeepSeek.

    Immediate Reaction

    President Trump has already commented on DeepSeek’s breakthrough, calling it a “wake-up call” for American industries. He acknowledged that DeepSeek’s AI model is not only faster but also much cheaper than its US counterparts, which he believes could be a positive development for America. This suggests that while he recognizes the competitive threat posed by DeepSeek, he also sees an opportunity for US companies to innovate and reduce costs.

    Long-Term Strategy

    Trump’s administration has announced a massive $500 billion initiative to boost the United States’ AI infrastructure, known as the “Stargate” project. This project aims to build advanced data centers and infrastructure in Texas, with key partnerships including OpenAI, Oracle, and SoftBank. Trump’s focus on increasing US competitiveness in AI and tech was a central theme of his speech, where he reassured American companies that the administration would push to reduce costs while still achieving high-quality results.

    Geopolitical Implications

    Trump’s stance on China has been complex and often fluctuates between aggressive and conciliatory. While he has previously taken a hard line on China, including imposing tariffs and criticizing Chinese tech advancements, his recent comments on DeepSeek suggest a more nuanced approach. He views DeepSeek’s cost-effective innovation as a potential asset for the US, indicating a willingness to learn from and compete with Chinese advancements. This could mean that while he continues to see China as a main opponent, he might also use this competition to drive further innovation and efficiency within the US tech sector.

    In summary, Trump’s reaction to DeepSeek is likely to be a mix of recognizing the competitive threat, using it as a catalyst for further investment in US AI infrastructure, and maintaining a strategic stance on China. This approach aligns with his broader goals of ensuring US technological dominance and economic efficiency.