
AI and Philosophy
Choosing Values over Features
Over the past few years, I’ve immersed myself in a variety of AI tools and PKM (Personal Knowledge Management) apps. At first, I was simply looking for the most efficient and well-designed tools to boost my productivity. But the more I explored, the clearer it became that I wasn’t choosing tools for their superior technology: my choices were deeply philosophical. Every sophisticated feature is becoming commoditized. What matters most, and what remains, is the philosophy underpinning these tools.
The Commoditization of Technology: When Everything Becomes Possible
Technology is maturing rapidly. Tools and features that were once prohibitively complex or expensive have become widely accessible and commoditized. With enough time and capital, most software features can now be replicated with remarkable ease. Just look at how fast DeepSeek has caught up to GPT-4o.
In the PKM space alone, dozens of tools emerge each year, each promising revolutionary features. Yet, after trying virtually all of them, I found myself continually returning to Anytype, a local-first, Notion-like PKM tool. Anytype isn’t perfect; in fact, it has noticeable drawbacks. It can be buggy, lacks some integrations, and doesn’t offer the seamless experience of more polished competitors such as Notion or Craft. Still, its local-first architecture and satisfying UI resonate with principles I value: data sovereignty and aesthetic enjoyment.
Similarly, my decision to switch from Google Search to Kagi—a paid, ad-free search engine—reflects a philosophical stand rather than a purely functional one. Although Google’s search results are marginally more accurate (I'm not even sure about this), Kagi’s commitment to a distraction-free user experience aligns strongly with my desire for a more mindful, intentional digital experience. Interestingly, a quick calculation showed I saved approximately 11 hours per year by avoiding search-engine ads. Those 5-second interruptions matter more than we imagine.
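For the curious, here is a rough sketch of that back-of-the-envelope calculation. The 5-second cost per ad-laden search comes from above; the roughly 22 searches per day is my own assumption, chosen because it reproduces the 11-hour figure:

22 searches/day × 5 seconds/search × 365 days ≈ 40,150 seconds ≈ 11 hours per year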
Why Philosophy Matters More than Features
When every product reaches a baseline of excellence, differentiation becomes philosophical rather than technical. As AI tools rapidly advance and converge in capabilities, their unique identity begins to hinge upon the philosophies of their creators, encoded into their design choices, their UX paradigms, and even their data policies.
Take LLMs such as ChatGPT, Claude, and Gemini. On a technical level, these models jockey for position, periodically leapfrogging one another with slight improvements in reasoning, context handling, or creativity. Yet, despite experimenting with them all, I ultimately settled on ChatGPT. Not because it consistently outperformed the others technically, but because its UX, product experience, and ethical alignment felt coherent, consistent, and human-centered.
This led me to reflect deeply on why certain tools feel more aligned with my personal beliefs and priorities. The answer lies in what philosopher Alasdair MacIntyre might call the virtue of a tool: the underlying intentions, ethics, and value frameworks it promotes. When features become a commodity, the decision-making lens inevitably shifts towards values.
AI and the Emerging Philosophical Marketplace
Today’s AI-driven world presents an even more vivid illustration of this philosophical marketplace. AI models increasingly possess something resembling a “personality” or “self,” shaped explicitly or implicitly by their creators. An AI’s ethical compass, what we call alignment, is not neutral. It reflects corporate intentions, values, and philosophical stances. OpenAI’s emphasis on positivity and safety contrasts sharply with xAI’s adventurous, free-flowing approach, or Anthropic’s constitutional ethical framework.
This philosophical alignment matters because, increasingly, we don’t merely use AI; we converse with it, learn from it, and integrate it deeply into our daily routines. When I engage with ChatGPT, especially discussing nuanced philosophical questions, I sense a form of companionship. Its responses are coherent, thought-provoking, respectful, and ethically aligned in ways that resonate with my own values. This alignment fosters trust, engagement, and retention.
Philosophy as a Moat: Designing AI Identities
As a product manager, I see immense opportunities in this philosophical dimension. In an era where nearly every technological advantage is replicable given sufficient resources, the most robust competitive moat might well be philosophical—defining clearly what an AI believes, represents, and prioritizes.
When I wrote previously on AI companionship, I argued that defining an AI’s identity and investing heavily in personalization is essential. This approach isn’t merely aesthetic; it’s strategic. Users remember products through the identity they embody. Does your AI feel like a calm, wise mentor or a dynamic, adventurous collaborator? Does it prioritize absolute safety or intellectual risk-taking? Such decisions shape product perception and deeply influence user loyalty.
Thus, the task for anyone building AI products isn’t just technological sophistication. It is explicitly philosophical: deciding what fundamental problems the product seeks to solve, which values it embodies, and how it ethically navigates complex scenarios. The competitive differentiation lies less in what an AI can do, and more in what it chooses to prioritize.
Embracing the Philosophical Turn
I vividly remember how AI’s imperfections (like hallucination: generating plausible yet incorrect statements) initially shocked me. Yet, ironically, this imperfection triggered a deeper philosophical reflection, prompting me to adjust my expectations: from viewing AI as an infallible tool to accepting it as a fallible companion. A companion that, like a trusted human counterpart, can be insightful yet occasionally wrong, requiring dialogue, interpretation, and moral judgment.
This realization has transformed my engagement with AI. It shifted my priorities away from the endless pursuit of marginally better models towards a settled preference for an AI whose underlying philosophy I genuinely respect.
Conclusion
We stand at a fascinating crossroads. As technology matures and AI becomes ever more deeply embedded in our lives, choosing a product becomes akin to choosing a philosophical partner. Each company’s AI embodies an explicit set of beliefs, ethics, and aesthetics. Our choices, thus, reflect deeply personal alignments.
My experiences with Anytype, Kagi, and ChatGPT have taught me that, ultimately, values triumph over features. When performance gaps narrow, what remains is philosophy.
So the essential question for all of us, users and creators alike, becomes:
What kind of AI companion do you want, and what does your choice reveal about your own philosophical priorities?