The AI Skills of Social Scientists

I was lucky enough to be asked to speak on panels at this year’s APT conference and at the University of London in recent weeks, both focusing on the adaptations that higher education institutions need to make in a world of ubiquitous generative AI. Judging by the questions from audience members at those events, a central theme of educators’ worries is: what AI skills do our students need going forward?

Implicit in the question is the worry that some of the skills that universities have foregrounded in their pedagogy might be in dwindling demand or imminently automatable. Structuring an argument, compiling thoughts into coherent sentences and compelling paragraphs, writing about technical subjects with accuracy and precision, even conducting literature reviews – all seem, if not wholesale outsourced to AI in the near future, at least likely to be simplified and accelerated by generative systems. While there may be routes to (at least temporarily) skirt the use of generative AI in assessments of these skills, that is no tonic for the falling demand for those skills, nor for their growing computer-aided prevalence.

These skills will not become irrelevant. Even in the worst-case scenarios for devotees of entirely human authorship, there will be a niche for the proudly anachronistic. Just as ‘hand-made’ serves as a mark of quality in crafts, there is likely to be an enduring appeal to the stridently ‘entirely human-authored and edited’ essay. Many of the virtues that make hand-made goods appealing – among them idiosyncrasy, exclusivity and a sense of craftspersonship that offers authenticity – will appeal in human-authored books and articles too.

The history of the automation of creative endeavours should reinforce the belief that writers and artists can expect to retain some role, even if generative AI can consistently and cheaply match the quality of anything they produce. Arguably, we first automated the performance of music in 1860, when Édouard-Léon Scott de Martinville recorded a snippet of ‘Au Clair de la Lune’ on a phonautograph. From gramophones through Walkmans to iPods and streaming platforms, we no longer need to acquire musical skills or hire a band of musicians to hear our favourite songs or provide ambiance for a venue. But even though we can all listen to Elton John on demand, with no need for him or a cover artist, hundreds of thousands still turned out for his farewell tour. There is something about a human touch in the real world that we value above the automated equivalent. Scott Santens discusses this phenomenon in his metaphor of the jazz brunch. In New Orleans, with brunch cafes competing for custom, it’s not enough to have jazz on the sound system – venues that employ live musicians outcompete those that don’t, and the best can charge a premium. We should perhaps expect a similar future for publications and publishing houses.

From the same perspective, the first phase of the automation of education came with the printing press (if not with the book more generally). With the creation of textbooks, it became unnecessary to perform original research or learn directly from a teacher to acquire new information. But just as live music is more edifying than a recording, a personal teacher is often more effective than a textbook – and, if the past two decades of educational YouTube, MOOCs and online encyclopedias are representative, more effective too than on-demand access to the majority of the world’s information, even with the accompaniment of expert commentary.

But should these skills become relatively devalued, there are others embedded even in traditional coursework that remain significant: critical thinking and critical appraisal of evidence; creativity and originality; systems thinking and approaches to complex problems; the ability to make connections that are unexpected, underappreciated or unconventional. There is an extent to which generative AI can simulate the outputs of these skills, but at the very least the creative prompting, editing and combination of AI tools are techniques of increasing value. These skills are not entirely replaceable, though. Even with advances that decrease hallucination, critical thinking and appraisal of evidence remain vital to gaining anything from AI-authored texts. Creativity and originality are necessary if we want to get anything much more valuable out of these systems than what was in their training data. A systems thinking approach to complex problems is not yet embedded in language models – it involves precisely that skill of making and following connections. It is also perhaps the approach most valuable for the non-technical problems that machine learning systems are least adept at solving: those with complex social, political, economic and philosophical embeddings – climate change, inequality, and the impact of AI on society itself.

But that doesn’t mean we should stick to our guns as educators. There are skills we undervalue at present that will be rendered more potent and valuable in combination with generative AI. We should steer into those skills, not only because doing so will increase the value of the education we provide and best serve our students, but because it will build future generations of scholars, policymakers, decision-makers and creators who are better equipped to solve problems and create better systems. As an opening salvo, I offer these three:

1. Critical Social Science of AI

Perhaps the most urgent AI skill social scientists need is the ability, inclination and authority to apply social scientific methods, ideas and critical approaches to AI – whether to the systems themselves, the power structures they are embedded in, or the organisations that create, curate, promulgate and regulate those tools.

In that respect, many of the most fundamental and pressing AI skills are the skills of the social scientific disciplines themselves. What is potentially missing at present are enabling approaches and disciplinary philosophies that invite and encourage students into the AI space, rather than yielding that discussion to the disciplines that have taken primary responsibility for AI development (computer science, mathematics, statistics and, to some extent, philosophy of technology) and to private companies. AI advocates and developers have clamoured for regulation, wrung hands about ethics, and pondered a little politics and policy. Their technical command of a fast-paced field has crowded out many social scientists, who feel uncomfortable in that space or unqualified to be there. That feeling is mistaken: welcome or not, social scientists belong in those conversations. Their skills are vital.

The ability to develop or even to employ AI systems is, to some extent, a secondary concern. We must not refuse the opportunity to bring students into critical engagement with AI systems for any reason – whether students’ knowledge gaps or our own, concerns about the use of the technology in assessment, squeamishness about what generative AI means for the future of our disciplines or our sector, or a lack of canonical works in our fields. The worst option isn’t to try to ban generative AI. It’s to try to ignore it.

When we are asked ‘How are we embedding AI skills into our social science courses?’, the most urgent answer right now is to bring a range of social scientific disciplines to bear on the role of AI. We shouldn’t miss the opportunity to talk critically about AI, interrogate its social impacts, and invite students to apply critical skills to AI just because AI isn’t the central topic of our courses. We don’t need a special AI module in every degree programme. We do need AI to be a topic across degree programmes.

2. Authorship and Individuality

The second major ‘AI skill’ we can teach, and start to value more highly, is knowing what you are capable of in combination with a given AI system. Knowing the contours of those systems and the affordances they provide is a start, but it is far from the whole project. In a world in which anyone can generate a half-decent report, essay or paper on a technical topic, knowing how to do that generation and which tool to use is a prerequisite – and not a particularly interesting one.

What’s far more relevant is knowing how the skills, ideas, background, context and perspective that you offer, as an individual with a set of experiences and training, allow you to do things with the tool that others cannot. How do we write something different, unique, interesting and distinctive with such a tool?

In the academic domain, much of this will be about those skills of critical connection-making: following threads and noticing patterns that others might miss without your knowledge, and that others cannot appreciate or understand without your context and experience. We can hope that much of the grunt work of academic writing becomes increasingly straightforward, leaving more time for the polish of creative authorship. Given that, it is worth asking what the creative writing skills of academic work actually are. In the creative space, authorial voice has to be a priority. To be clear, that authorial voice does not have to be developed solely by the individual. It can emerge from collaboration with an AI system, so long as stylistic and personal choices are embedded throughout the written piece.

This is an area in which academia, broadly speaking, may have misstepped lately. We have habitually flattened out the idiosyncrasies of written style and homogenised our approach to writing in favour of disciplinary standardisation. There are benefits to this, but they will be diluted by generative AI. A move towards endorsing, and ultimately encouraging, authorial voice in academic writing would offer more advantages than ever (and be more plausible than ever) in this world.

3. Leadership and Collaboration

The third skill is broad, but critical in the long term for effective human-AI co-creation that goes meaningfully beyond what either can produce in isolation. Leadership and management skills are central to working with AI and treating it appropriately. Collaborators with AI systems need techniques for effective collaboration – and these might diverge from the most effective strategies among human coauthors. They need managerial skills that help them know how, what and when to delegate to AI systems, and how to make the best use of work produced by others, including AI others.

Five years ago, Nicky Case wrote How to Become a Centaur, which I still return to when I think about how to work alongside AI systems or with AI tools. It pivots around the story of the chess world champion Garry Kasparov being defeated by Deep Blue – not the games he lost, but the game he created in response: centaur chess. In centaur chess, rather than a human grandmaster competing against a chess engine or another grandmaster, chess becomes a team sport. In a centaur chess competition, any combination of players and engines can work together to compete against other teams. A grandmaster with a supercomputer can face off against a single chess engine, or against a team of three grandmasters working together.

The surprise winners of Kasparov’s first centaur chess competition were a group of amateur players with access to several weak chess engines. They defeated the strongest chess engines of the day and the strongest grandmasters alike. They won because they had the best process for working together: the best understanding of their own strengths and weaknesses, and of the capabilities and limitations of the engines they used. They played AIs off against each other and pooled their different capabilities effectively. That is, ultimately, a management strategy. Curating the processes that allow us to bring our individuality into the outputs of AIs, combine those outputs in heretofore unseen ways, and understand how to get the best out of each contribution: those are the leadership skills for working collaboratively with AI.

When we think about how to build effective “centaurs” who can partner productively with AI systems, the priority is to ask what the best of the human input is, and what the best of the AI tools’ capabilities can be. A great mythological centaur pairs the battle smarts and archery of the human top half with the speed and manoeuvrability of the equine lower half. The worst-case scenario is to badly automate what we’re good at and refuse to take advantage of the strengths of the generative systems, like a horse-headed hybrid stumbling about on human legs. In other words, when we’re thinking about skills for the future, we need to think Sagittarius, not BoJack Horseman.


This post was written entirely by a human. No generative AI was used at any stage of the composition process. Not because that would have been bad, you understand – I just chose to write it entirely myself.

Latest update: 14/07/23