
Work-bot balance

Charlie Ball explores what artificial intelligence might mean for defining graduate career success

It is easy to forget that current iterations of ChatGPT and its variants have been around for just a few months. Whole cottage industries appear to have sprung up around predictions of what these new language models may or may not do to skills, employability or—in extreme cases—entire ways of life. While the pace at which these models are evolving makes it hard to assess their capabilities exactly, some emerging ideas are worth exploring—including what they might mean for the notion of a graduate job.

First, let’s turn the floor over to ChatGPT:

“As a language model, I have the ability to process and analyse vast amounts of data, including the text of millions of academic papers, textbooks and other scholarly materials. This means that I can quickly and accurately evaluate a student’s work and provide feedback that is tailored to their specific needs. In addition, my advanced natural language processing capabilities allow me to understand and respond to students in ways that are more personalised and nuanced than traditional grading systems. By leveraging these capabilities, I have the potential to change the way we measure successful outcomes for university students, and to provide them with more targeted feedback and guidance as they work towards achieving their academic and career goals.”

Pros and cons

So far, so bland. The thing is, in order to produce those 118 words of inoffensive and largely accurate output, I first had to issue 99 words of instructions (including the instruction to refer to itself in the first person). I would have been better off spending the time writing that paragraph myself.

But it does neatly encapsulate both the pros and the cons of the model. It has produced a neat summary of, well, itself. But it can’t even talk about itself accurately. ChatGPT’s advanced natural language processing capabilities do not allow it to understand anything; they allow the chatbot to predict which word comes next. That is not how humans process language or think, and it is not how our society, culture or forms of working operate. It is a genuinely alien mode of expressing information.

This is not necessarily a barrier to it operating effectively, and it is already clear that ChatGPT has many advantages in generating text and synthesising readily available but unwieldy information. But that same alien mode of parsing information means that roles are likely to open up for ‘chatbot whisperers’: individuals skilled in coaxing meaningful content from bots and in making sure the output is accurate. The issue is not simply that ChatGPT cannot address some questions or issues accurately; it is that when it cannot, it still produces output that appears authoritative and accurate enough to suggest that it can.

Waffle spotting

I am something of an expert on the UK labour market, a topic on which a great deal of information is available online. The various bots can produce serviceable overviews of the broader jobs market and even avoid some common inaccurate media tropes. As soon as you get into detail or specifics on specialist topics, however (the very things about which content is not easy to find swiftly online), they produce waffle and sometimes wholly inaccurate information, inaccurate in a way that requires an expert to spot. Alas, my dreams of harnessing magic robots to do all my work for me have been dashed.

There are, nevertheless, useful things it can do. While writing this piece, for example, I got ChatGPT to write a statistical script for me to plot census data. It’s not the best script ever, but it works and it’s all spelled correctly first time, which immediately improves on my usual efforts. The bot still needs instructions on the aim of the script, which is the creative human element, and humans still have to do something useful with the output, but it produces working code.
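To give a flavour, here is a minimal sketch, in Python, of roughly the kind of plotting script involved. The file name and column names are hypothetical placeholders for illustration; they are not the actual census extract or the code the bot produced.

    # Minimal sketch of a census-plotting script of the kind a chatbot
    # can generate on request. "census_extract.csv" and its columns
    # ("region", "population") are hypothetical placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("census_extract.csv")

    # Sort so the bar chart reads from smallest to largest population
    df = df.sort_values("population")

    plt.barh(df["region"], df["population"])
    plt.xlabel("Population")
    plt.title("Census population by region")
    plt.tight_layout()
    plt.show()

Even at this scale, the division of labour is visible: the human decides what the chart is for and checks that the result makes sense; the bot supplies the boilerplate.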

This means that in an artificial intelligence world, the notion of a ‘graduate job’ might be modified. Chatbots are not going to nurse you (although they may diagnose and prescribe, to an extent), but they may run your accounts and help deal with your routine legal and compliance issues. They will need instruction and assistance, and in particular careful checking when anything out of the ordinary happens and they try to adjust. That will be a skilled job. Graduates are likely to dominate this field of AI wrangling, and for many of them, graduate success will lie in an ability to extract the most value from AI tools.

Changing metrics

Another question that arises is what this might do to the metrics used to measure such success. AI is actually rather good at the well-defined data calculations that go into metrics. Not only could AI plausibly be used to calculate metrics of this nature, but doing so also offers the interesting prospect of metrics administered by systems free of some human biases.

What would a system of higher education metrics look like if developed and run by AI without learned concepts of which UK universities have the highest status? Could AI-driven performance measures be developed that genuinely take into account the contexts in which institutions operate, such as the admissions profiles of students and local labour markets?

AI is good at spotting patterns and might be able to account for things that have so far gone unnoticed or haven’t been considered important. Some of these will undoubtedly be red herrings, but it is not difficult to envisage situations in which a well-instructed AI bot discovers something affecting outcomes that humans have not noticed or been readily able to link.

This, then, could be where the real benefits arise: not in creating ever more complex league tables, but in gaining a genuinely deep understanding of which factors make institutional delivery and practice more effective.

Charlie Ball is head of labour market intelligence at Jisc.