Back to the Future
Philosophy Grads Sought for Careers in AI!
“Philosophers with dim career prospects are in demand to research the ethics of data tech.” —Financial Times
“I spent a day this week interviewing philosophers.”
— Helen Margetts, Alan Turing Institute for Data Science and AI
EDITOR’S NOTE: We have all wondered what kinds of jobs digital disruption will create that mesh well with robots, AI, and automation…and humans. Well, the future has spoken and has just popped a well-paid, career-friendly job into view: philosopher.
Yes, you heard that correctly. Philosophy majors are suddenly in vogue, sought to deal with “the complex ethical issues involved in using artificial intelligence for policymaking.”
Now worried parents will have no trouble encouraging their sons and daughters to be more like the existentialist philosophers Jean-Paul Sartre and Simone de Beauvoir.
In her letter to the editor of the Financial Times, Helen Margetts of the Alan Turing Institute for Data Science and AI opens with the stunning confession: “I spent a day this week interviewing philosophers.”
Here’s her letter explaining why the job is so important.
Helen Margetts is professor of society and internet at the Oxford Internet Institute, University of Oxford, and public policy program director at The Alan Turing Institute.
Helen Margetts
I spent a day this week interviewing philosophers. We need one to take up a postdoctoral position in ethics on our public policy program at the Alan Turing Institute for Data Science and AI. We need someone to help government navigate the complex ethical issues involved in using artificial intelligence for policymaking. Our current fellow is working nights and weekends. He has just written the UK government’s guidance on AI ethics and safety for the public sector.
We are not the only ones in the UK hiring. Similar positions are advertised at the Oxford Internet Institute, my home department at the University of Oxford, and countless other institutions.
The career prospects of philosophers were not always this promising. Only a few years ago, discouraged undergraduates would complain that despite the intellectual rigor of their degrees, the only interview preparation they needed after graduating was practicing how to ask: “Would you like fries with that?” Even newly minted PhDs struggled in the job market. Academic openings were few. Now these same philosophers are in high demand to research the ethics of data and technology.
The interest of so many philosophers in technology is also new. The social sciences and humanities were slow to recognize the importance of technology, either for what they studied or how they studied it. This week’s announcement of a £150m donation to fund a new humanities hub for Oxford, plus an institute of ethics and AI, was hailed as a sign of how much has changed.
As a budding political scientist, I wrote my PhD about information technology in government in the 1990s. At the time this was a sad and lonely thing to do; neighbors in the London School of Economics dining room would cast their eyes around for a “normal” political scientist who could pass on some Westminster gossip. Then along came the internet, the first IT to be domesticated in our social, economic and political networks, followed by huge digital platforms based on processing massive quantities of data generated by human interactions, and the current AI revolution. I have a better time at dinner parties these days.
Data-intensive technologies — usually, and often inaccurately, labelled artificial intelligence — are having a profound effect on research and challenging the boundaries of the fields that feed the pipeline of technology specialists.
Those who are expert in manipulating huge data sets and complex networks — engineers, physicists and computer scientists — have traditionally trained on data relating to materials, cells, atoms, bits. When they enter the huge industry that has grown up around the platforms of Facebook, Amazon and Google, most of the data they deal with relates to people. But the academic training ground for understanding human behavior is social science — including economics, anthropology, sociology and political science — the concepts and methods of which bore most engineers to tears.
These technologies provoke a whole raft of new ethical issues and dilemmas. They can reduce the transparency and accountability of business processes and decision-making, requiring frameworks to ensure trust. There are issues of privacy and rights connected with personal data. Machine-learning algorithms can introduce bias and discrimination. Resolving such issues requires an approach grounded in ethics and an understanding of what causes bias in the first place — traditionally the province of sociology.
Conversely, social scientists can be ill-equipped to research a society in which digital platforms are embedded. Such platforms offer exciting possibilities for my discipline, which has traditionally been based on surveys about what people think they might do (like vote in an election tomorrow) or what they think they have already done (but may not remember), rather than today’s huge banks of real-time data. But to use such data requires expertise that can only be provided by diverse research teams.
When the University of Oxford created my department, a multidisciplinary environment devoted to research and scholarship on the relationship between society and the internet — and now AI — it was prescient and brave. There could be an economic as well as a moral imperative for other bold moves.
The US and China have outstripped all other nations in the AI technology race, however many unicorns we claim. But the giants of Silicon Valley have not been so keen to welcome the philosophers. US president Donald Trump signed an executive order on “Maintaining Leadership in AI” in February, which barely mentions ethics at all. As for China, even the Trump administration has blacklisted Chinese AI companies over ethical concerns. As post-Brexit Britain searches for a comparative advantage, ethical AI could be part of the answer.
See also: Workplace Survival Skills in an Age of Automation