Back to the Future
Philosophy Grads Sought for Careers in AI

“Philosophers with dim career prospects are in demand to research the ethics of data tech.” —Financial Times
EDITOR’S NOTE: We have all wondered what kinds of jobs digital disruption will create, jobs that mesh well with robots, AI, and automation… and with humans. Well, the future has spoken, and it has just popped a well-paid, career-friendly job into view: philosopher.
Yes, you read that correctly. Philosophy majors are suddenly in vogue, recruited to deal with “the complex ethical issues involved in using artificial intelligence for policymaking.”
Now worried parents will have no problem encouraging their sons and daughters to be more like existentialist philosophers Jean-Paul Sartre and Simone de Beauvoir.
In her letter to the editor of the Financial Times, Helen Margetts of the Alan Turing Institute for Data Science and AI opens with the stunning confession: “I spent a day this week interviewing philosophers.”
Here’s her letter explaining why the job is so important.
Helen Margetts is professor of society and the internet at the Oxford Internet Institute, University of Oxford, and public policy program director at The Alan Turing Institute.
Helen Margetts
I spent a day this week interviewing philosophers. We need one to take up a postdoctoral position in ethics on our public policy program at the Alan Turing Institute for Data Science and AI. We need someone to help government navigate the complex ethical issues involved in using artificial intelligence for policymaking. Our current fellow is working nights and weekends. He has just written the UK government’s guidance on AI ethics and safety for the public sector.
We are not the only ones in the UK hiring. Similar positions are advertised at the Oxford Internet Institute, my home department at the University of Oxford, and countless other institutions.
The career prospects of philosophers were not always this promising. Only a few years ago, discouraged undergraduates would complain that despite the intellectual rigor of their degrees, the only interview preparation they had to do after graduating was practicing how to ask: “Would you like fries with that?” Even newly minted PhDs struggled in the job market. Academic openings were few. Now, these same philosophers are in high demand to research the ethics of data and technology.
The interest of so many philosophers in technology is also new. The social sciences and humanities were slow to recognize the importance of technology, either for what they studied or how they studied it. This week’s announcement of a £150m donation to fund a new humanities hub for Oxford, plus an institute of ethics and AI, was hailed as a sign of how much has changed.
As a budding political scientist, I wrote my PhD about information technology in government in the 1990s. At the time this was a sad and lonely thing to do; neighbors in the London School of Economics dining room would cast their eyes around for a “normal” political scientist who could pass on some Westminster gossip. Then along came the internet, the first IT to be domesticated in our social, economic and political networks, followed by huge digital platforms based on processing massive quantities of data generated by human interactions, and the current AI revolution. I have a better time at dinner parties these days.
Data-intensive technologies — usually, and often inaccurately, labelled artificial intelligence — are having a profound effect on research and challenging the boundaries of the fields that feed the pipeline of technology specialists.
Those who are expert in manipulating huge data sets and complex networks — engineers, physicists and computer scientists — have traditionally trained on data relating to materials, cells, atoms, bits. When they enter the huge industry that has grown up around the platforms of Facebook, Amazon and Google, most of the data they deal with relates to people. But the academic training ground for understanding human behavior is social science — including economics, anthropology, sociology and political science — the concepts and methods of which bore most engineers to tears.
These technologies provoke a whole raft of new ethical issues and dilemmas. They can reduce the transparency and accountability of business processes and decision-making, requiring frameworks to ensure trust. There are issues of privacy and rights connected with personal data. Machine-learning algorithms can introduce bias and discrimination. Resolving such issues requires an approach grounded in ethics and an understanding of what causes bias in the first place — traditionally the province of sociology.
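To make “bias” concrete for readers who have never touched a model, here is a minimal sketch in Python; the hiring data, group names, and outcomes are all invented for illustration. It shows the kind of demographic-parity check an ethics researcher might run on a model’s decisions.

```python
# Illustrative only: a toy demographic-parity check on made-up model outputs.
# The applicant data, group names, and outcomes below are all hypothetical.

decisions = [
    # (applicant_group, model_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Share of applicants in `group` that the model approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")  # 0.75 on this toy data
rate_b = approval_rate("group_b")  # 0.25 on this toy data

# The US EEOC's "four-fifths rule" flags selection processes where one group's
# rate falls below 80% of the other's; it is used here purely as illustration.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: one group is approved far more often.")
```

The arithmetic is trivial. The hard part, and one reason institutes are hiring philosophers, is deciding which fairness standard should govern: demographic parity, equalized error rates, and calibration are distinct criteria that in general cannot all be satisfied at once.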
Imagine These Two Working an LLM?

Conversely, social scientists can be ill-equipped to research a society in which digital platforms are embedded. Such platforms offer exciting possibilities for my discipline, which has traditionally been based on surveys about what people think they might do (like vote in an election tomorrow) or what they think they have already done (but may not remember), rather than today’s huge banks of real-time data. But using such data requires expertise that only diverse research teams can provide.
When the University of Oxford created my department, a multidisciplinary environment devoted to research and scholarship on the relationship between society and the internet — and now AI — it was prescient and brave. There could be an economic as well as a moral imperative for other bold moves.
MIT Sloan School’s take on philosophy in AI
“Even as software eats the world (Andreessen) and AI gobbles up software (Huang), what disruptor appears ready to make a meal of AI?
“The answer is hiding in plain sight. It challenges business and technology leaders alike to rethink their investment in and relationship with artificial intelligence. There is no escaping this disrupter; it infiltrates the training sets and neural nets of every large language model (LLM) worldwide. Philosophy is eating AI: As a discipline, data set, and sensibility, philosophy increasingly determines how digital technologies reason, predict, create, generate, and innovate.
“The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI or default to tacit, unarticulated philosophical principles for their AI deployments. Either way — for better and worse — philosophy eats AI. For strategy-conscious executives, that metaphor needs to be top of mind.
“While ethics and responsible AI currently dominate philosophy’s perceived role in developing and deploying AI solutions, those themes represent a small part of the philosophical perspectives informing and guiding AI’s production, utility, and use. Privileging ethical guidelines and guardrails undervalues philosophy’s true impact and influence. Philosophical perspectives on what AI models should achieve (teleology), what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation. Without thoughtful and rigorous cultivation of philosophical insight, organizations will fail to reap superior returns and competitive advantage from their generative and predictive AI investments.
“This argument increasingly enjoys both empirical and technical support.”
by Michael Schrage & David Kiron
And Now for the "Phil Gigs"
Given the Sloan School’s take on philosophy, how about a dozen specific jobs where philosophy grads and working philosophers can send a resume, reasonably expect an interview, and stand a high probability of landing the job?
Here is a gig list of 12 career paths for philosophy majors in AI and Generative AI (GenAI), leveraging their unique skills in ethics, logic, critical thinking, and interdisciplinary analysis:
Each of these gigs capitalizes on philosophy majors’ strengths in ethical reasoning, analytical rigor, and holistic thinking, positioning them as vital contributors to the responsible evolution of AI technologies. And above all, please remember that last bit: You philosophers ARE IMPORTANT TO THE REST OF US!
Now, get out your resume!
- AI Ethics Consultant
Advise organizations on ethical AI development, addressing bias, privacy, and societal impacts.
- AI Policy Analyst/Governance Specialist
Shape regulations and policies to ensure responsible AI deployment across industries and governments.
- AI Product Manager
Oversee AI product development, balancing technical feasibility with ethical and user-centric design.
- AI Logic Engineer
Design and refine AI algorithms, applying formal logic and reasoning frameworks to improve system accuracy.
- AI Transparency Advocate
Promote explainable AI (XAI) by making AI decision-making processes understandable to users and regulators.
- Cognitive Architect
Develop AI systems that emulate human reasoning, informed by philosophy of mind and consciousness studies.
- AI Ethics Educator
Teach ethical AI practices in academic or corporate settings, fostering responsible innovation.
- AI Communication Specialist
Translate complex AI concepts into accessible content for non-technical audiences (e.g., technical writing, PR).
- AI & Philosophy Researcher
Conduct interdisciplinary research on topics like AI consciousness, moral agency, or epistemology of machine learning.
- Human-AI Interaction Designer
Design user experiences that align AI interfaces with human values, ensuring ethical and intuitive interactions.
- AI Legal Analyst/Compliance Officer
Navigate AI-related legal challenges, such as liability, intellectual property, and regulatory compliance.
- AI Futurist/Strategic Foresight Analyst
Analyze long-term societal impacts of AI, guiding organizations on emerging trends and ethical foresight.