
Philosophy is eating AI: As a discipline, data set, and sensibility, philosophy increasingly determines how digital technologies reason, predict, create, generate, and innovate. The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI or default to tacit, unarticulated philosophical principles for their AI deployments. Either way — for better and worse — philosophy eats AI. For strategy-conscious executives, that metaphor needs to be top of mind.
While ethics and responsible AI currently dominate philosophy’s perceived role in developing and deploying AI solutions, those themes represent a small part of the philosophical perspectives informing and guiding AI’s production, utility, and use. Privileging ethical guidelines and guardrails undervalues philosophy’s true impact and influence. Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation. Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.
This argument increasingly enjoys both empirical and technical support. There’s good reason investors, innovators, and entrepreneurs such as PayPal cofounder Peter Thiel, Palantir Technologies’ Alex Karp, Stanford professor Fei-Fei Li, and Wolfram Research’s Stephen Wolfram openly emphasize philosophy and philosophical rigor as drivers for their work.1 Explicitly drawing on philosophical perspectives is hardly new for AI. Breakthroughs in computer science and AI have consistently emerged from deep philosophical thinking about the nature of computation, intelligence, language, and mind. Computer scientist Alan Turing’s fundamental insights about computers, for example, came from philosophical questions about computability and intelligence — the Turing test itself is a philosophical thought experiment. Philosopher Ludwig Wittgenstein’s analysis of language games and rule following directly influenced the development of computer science, while philosopher Gottlob Frege’s investigations into logic provided the philosophical foundation for several programming languages.2
More recently, Geoffrey Hinton’s 2024 Nobel Prize-winning work on neural networks emerged from philosophical questions about how minds represent and process knowledge. When MIT’s own Claude Shannon developed information theory, he was simultaneously solving an engineering problem and addressing philosophical questions about the nature of information. Indeed, Sam Altman’s ambitious pursuit of artificial general intelligence at OpenAI purportedly stems from philosophical considerations about intelligence, consciousness, and human potential. These pioneers didn’t see philosophy as separate from practical engineering; on the contrary, philosophical clarity enabled technical breakthroughs.
Today, regulation, litigation, and emerging public policies represent exogenous forces mandating that AI models embed purpose, accuracy, and alignment with human values. But companies have their own values and value-driven reasons to embrace and embed philosophical perspectives in their AI systems. Giants in philosophy, from Confucius to Kant to Anscombe, remain underutilized and underappreciated resources in training, tuning, prompting, and generating valuable AI-infused outputs and outcomes. As we argue, deliberately imbuing LLMs with philosophical perspectives can radically increase their effectiveness.
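To make this concrete, consider one lightweight way a team might make those perspectives explicit at the prompting layer rather than leaving them tacit. The sketch below is illustrative only, assuming the OpenAI Python SDK; the framework text, model name, and business question are placeholders we invented, not a prescribed method.

```python
# A minimal sketch of embedding an explicit philosophical frame in a system
# prompt, assuming the OpenAI Python SDK (pip install openai). The framework
# text, the model name, and the question are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# State the teleology, epistemology, and ontology up front instead of
# leaving them implicit in training data and defaults.
PHILOSOPHICAL_FRAME = """\
Purpose (teleology): Your outputs exist to improve this firm's pricing decisions.
Knowledge (epistemology): Distinguish evidence from inference; flag low confidence.
Representation (ontology): Treat 'customer segment' as a model, not a fact.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": PHILOSOPHICAL_FRAME},
        {"role": "user", "content": "Should we discount the enterprise tier next quarter?"},
    ],
)
print(response.choices[0].message.content)
```

The design point is not the specific wording but that the teleological, epistemological, and ontological commitments become reviewable artifacts that leaders and developers can align on.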
This doesn’t mean companies should hire chief philosophy officers … yet. But treating philosophy and philosophical insights as incidental or incremental to enterprise AI minimizes their potential technological and economic impact. Effective AI strategies and execution increasingly demand critical thinking — by humans and machines — about the disparate philosophies determining and driving AI use. In other words, organizations need an AI strategy for and with philosophy. Leaders and developers alike need to align on the philosophies guiding AI development and use. Executives intent on maximizing their return on AI must invest in their own critical thinking skills to ensure that philosophy makes their machines smarter and more valuable.
Google’s revealing and embarrassing Gemini AI fiasco illustrates the risks of misaligning philosophical perspectives in training generative AI. Afraid of falling further behind LLM competitors, Google upgraded the Bard conversational platform by integrating it with the tech giant’s powerful Imagen 2 model to enable textual prompts to yield high-quality, image-based responses. But when Gemini users prompted the LLM to generate images of historically significant figures and events — America’s Founding Fathers, Norsemen, World War II, and so on — the outputs consistently included diverse but historically inaccurate racial and gender-based representations. For example, Gemini depicted the Founding Fathers as racially diverse and Vikings as Asian females.
These ahistorical results sparked widespread criticism and ridicule. The images reflected contemporary diversity ideals imposed onto contexts and circumstances where they did not belong. Given Google’s great talent, resources, and technical sophistication, what root cause best explains these unacceptable outcomes? Google allowed teleological chaos to reign between rival objectives: historical accuracy on one hand and diversity, equity, and inclusion initiatives on the other.3 Data quality and access were not the issue; Gemini’s proactively affirmative algorithms for avoiding perceived bias toward specific ethnic groups or gender identities led to misleading, inaccurate, and undesirable historical outputs. What initially appears to be an ethical AI or responsible AI bug was, in fact, not a technical failure but a teleological one. Google’s trainers, fine-tuners, and testers made a bad bet — not on the wrong AI or bad models but on philosophical imperatives unfit for the model’s primary purpose.
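A toy calculation makes the teleological point vivid without any technical machinery. In the hedged sketch below, the candidate outputs, scores, and weights are all invented for illustration and imply nothing about Gemini’s actual architecture; it shows only that whoever sets the weights among rival objectives has already made a philosophical decision about purpose.

```python
# Toy model of "teleological chaos": two candidate outputs scored on rival
# objectives. Candidates, scores, and weights are invented for illustration.
candidates = {
    "historically accurate depiction": {"accuracy": 0.95, "diversity": 0.20},
    "demographically diverse depiction": {"accuracy": 0.30, "diversity": 0.95},
}

def select(weights: dict[str, float]) -> str:
    """Return the candidate whose weighted objective score is highest."""
    return max(
        candidates,
        key=lambda c: sum(weights[obj] * s for obj, s in candidates[c].items()),
    )

# A purpose-fit weighting for a historical prompt favors accuracy ...
print(select({"accuracy": 0.8, "diversity": 0.2}))  # -> historically accurate depiction
# ... while an unexamined, inverted weighting reproduces the Gemini-style failure.
print(select({"accuracy": 0.2, "diversity": 0.8}))  # -> demographically diverse depiction
```

Nothing in the second call is a bug; the code runs exactly as specified. The failure lies in the weights, which is to say, in the unexamined philosophy behind them.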
As intelligent technologies transition from language models to agentic AI systems, the ancient Greek warrior-poet Archilochus’s wisdom — “We don’t rise to the level of our expectations; we fall to the level of our training” — becomes a strategic warning. When paired with statistician George Box’s cynical aphorism — “All models are wrong, but some are useful” — the challenge becomes even clearer: When developing AI that independently pursues organizational objectives, mere “utility” doesn’t go far enough. Creating reliably effective autonomous or semiautonomous agents depends less on technical stacks or algorithmic innovation than on philosophical training that intentionally embeds meaning, purpose, and genuine agency into their cognitive frameworks. Performance excellence depends on training excellence: High-performance AI is contingent upon high-performance training.
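As a hedged illustration of that distinction, the sketch below contrasts an agent that maximizes bare utility with one whose options are first filtered through an explicitly stated purpose. Every name, action, and score here is hypothetical.

```python
# Hypothetical sketch: the same action-selection loop with and without an
# explicit purpose filter. All actions and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float  # what a merely "useful" agent maximizes
    purpose_fit: float       # alignment with the stated organizational purpose

actions = [
    Action("ship dark-pattern upsell", expected_utility=0.9, purpose_fit=0.1),
    Action("improve retention via service quality", expected_utility=0.7, purpose_fit=0.9),
]

def naive_agent(options: list[Action]) -> Action:
    """Bare utility: pick whatever scores highest, regardless of purpose."""
    return max(options, key=lambda a: a.expected_utility)

def purposive_agent(options: list[Action], threshold: float = 0.5) -> Action:
    """Admit only purpose-aligned actions, then maximize utility among them."""
    admissible = [a for a in options if a.purpose_fit >= threshold] or options
    return max(admissible, key=lambda a: a.expected_utility)

print(naive_agent(actions).name)      # -> ship dark-pattern upsell
print(purposive_agent(actions).name)  # -> improve retention via service quality
```

Real agentic systems are vastly more complex, but the structural lesson holds: Purpose must be represented explicitly somewhere, or utility maximization will quietly stand in for it.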
We argue that AI systems rise or fall to the level of their philosophical training, not their technical capabilities. When organizations embed sophisticated philosophical frameworks into AI training, they restructure and realign computational architectures into systems that:
- Generate strategic insights rather than tactical responses.
- Engage meaningfully with decision makers instead of simply answering queries.
- Create measurable value by understanding and pursuing organizational purpose.
These are strategic imperatives, not academic exercises or thought experiments. Those who ignore this philosophical verity will create powerful but ultimately limited tools; those embracing it will cultivate AI partners capable of advancing their strategic mission. Ignoring philosophy or treating it as an afterthought risks creating misaligned systems — pattern matchers without purpose, computers that generate the wrong answers faster.
AI’s enterprise future belongs to executives who grasp that AI’s ultimate capability is not computational but philosophical. Meaningful advances in AI capability — from better reasoning to more reliable outputs to deeper insights — come from embedding better philosophical frameworks into how these systems think, learn, evaluate, and create. AI’s true value isn’t its growing computational power but its ability to learn to embed and execute strategic thinking at scale.
Every prompt, parameter, and deployment encodes philosophical assumptions about knowledge, truth, purpose, and value. The more powerful, capable, rational, innovative, and creative an artificial intelligence learns to become, the more its ability to philosophically question and ethically engage with its human colleagues and collaborators matters. Ignoring the impact and influence of philosophical perspectives on AI model performance creates ever-greater strategic risk, especially as AI takes on a more strategic role in the enterprise. Imposing thoughtfully rigorous philosophical frameworks on AI doesn’t merely mitigate risk — it empowers algorithms to proactively pursue enterprise purpose and relentlessly learn to improve in ways that both energize and inspire human leaders.
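Even routine deployment settings illustrate the point. The annotated configuration below is a sketch with invented values, not a recommendation; each generation parameter quietly commits the system to a stance on knowledge, truth, or value.

```python
# Illustrative assumptions only: each sampling parameter below encodes an
# implicit philosophical stance. Values are placeholders, not guidance.
generation_config = {
    "temperature": 0.0,  # epistemology: the task has one defensible answer
    "top_p": 1.0,        # no extra pruning of the model's hypothesis space
    "max_tokens": 400,   # a value judgment: concision over exhaustiveness
    "seed": 42,          # reproducibility treated as a norm of knowledge
}
```

A team that can defend each line of such a configuration is already practicing the kind of critical thinking we argue AI strategy demands.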
Michael Schrage and David Kiron – https://sloanreview.mit.edu/article/philosophy-eats-ai/
1. L. Burgis, “The Philosophy of Peter Thiel’s ‘Zero to One,’” Medium, May 9, 2022, https://luke.medium.com; P. Westberg, “Alex Karp: The Unconventional Tech Visionary,” Quartr, May 8, 2024, https://quartr.com; F.-F. Li, “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI” (New York: Flatiron Books, 2023); and S. Wolfram, “How to Think Computationally About AI, the Universe, and Everything,” Stephen Wolfram Writings, October 27, 2023, https://writings.stephenwolfram.com.
2. M. Awwad, “Influences of Frege’s Predicate Logic on Some Computational Models,” Future Human Image Journal 9 (April 14, 2018): 5-19.
3. C. McGinn, “Intelligibility,” Colin McGinn, December 14, 2019, www.colinmcginn.net.