Artificial intelligence is here – we need to understand how to use it

Whether we like it or not, AI – and particularly generative AI – has arrived in higher education and in research, and it’s already changing the way things are done.

Students are using ChatGPT to help them with assignments, and researchers are experimenting with it to produce funding proposals. The most digitally savvy are using a range of AI tools to help with literature reviews, parse data, speed up and improve their writing, and generate presentations.

AI is already challenging established pedagogies and pushing academics to rethink how they teach and assess students. University administrators are worrying about what it might mean for quality and how to regulate it at institutional level.

As some have noted, AI could open a third digital divide – this time where the difference isn’t access to the technology, or even the skills to use it, but access to the people who can help others use it most effectively.

Embracing AI to understand its bias and navigate its use

Far from being the preserve of computer science and engineering departments, students and their teachers in every discipline will need to understand how AIs work in and on their world.

AIs could also transform learning positively – finally breaking the reliance on outdated forms of assessment.

It is students who can understand and work with AI who will thrive in the jobs of the future. And far from robots replacing teachers, it is precisely the expertise and humanity of educators that are needed to help navigate this.

Universities and knowledge institutions must embrace AI, to understand, critique and control it. But our colleagues and partners – academics, researchers and others in universities and “knowledge institutions” across Africa, Asia and Latin America – can’t afford to leave it to their Global North peers to define the practice.

The power and dangers of AI are too great, the blind spots in its data and the biases in its algorithms too significant – especially when it comes to the knowledge and experience of the majority world – and the potential for harm too considerable. These gaps need to be filled, biases surfaced, and harms identified and prevented.

Students, educators and researchers need to understand AI for themselves, to make their own decisions about when and where to use it, and to identify how it can serve their own needs.

Careful exploration, leveraging the possibilities

We need to understand these changes, build an awareness of them into our programme, and assist our colleagues to navigate them too.

AI is nothing like what has gone before – the dangers and the potential are far greater.

The only way to bridge that gap is to begin exploring it carefully – testing the tools for ourselves and, through that learning, becoming better placed to assist others to do the same. We started to do that in August, convening an event with East African partners to explore what generative AI might mean for teaching and learning.

What do educators need to know about AI to enable them to guide their students better? Could AI tools become professional development tools for over-stretched lecturers, helping them to update curricula, or identify new teaching activities and assignments to foster critical thinking? Could expertly prompted LLMs help deliver more personalised learning and feedback to students? And what do both need to watch out for as they explore those uses?

Two of our AuthorAID stewards have already written about their own experiences using AI and exploring its use in their research and in academic publishing.

Learning together and learning fast

Our hope, together with our East African colleagues, is to develop a new learning programme so that we can learn together and embed what we discover in the next phase of our TESCEA programme.

We’ve also begun to prototype new ideas for our AuthorAID community of early career researchers – we’re keen to see if AI tools could enable us to assist more of them to navigate research funding, data analysis, and publishing. Could they surface useful advice more quickly, synthesise the community’s knowledge, and enable members to spot new connections?

These are early steps, and we’re finding our way carefully. Over the coming months we plan to convene further conversations, secure the resources we need to begin to experiment practically, and work with our partners to learn together.

INASP has a long history of careful and considered exploration of digital technologies in research and higher education – recognising their shortcomings but leveraging their possibilities to advance learning and support change. The challenge and opportunity posed by AI – available on a basic smartphone with a decent connection – is greater than anything we’ve seen, which is why we know we can’t wait.

If we define AI as something with the potential to positively improve higher education and research – not simply a way to cheat the learning process – then there’s a chance to actively design for and secure its benefits, rather than be pushed into a defensive stance, trying to counter its downsides.

These may be early days, but things are moving fast. What we do now – collectively, as a community – will be critical in deciding whether AI becomes that force for good. Cautious but clear-sighted leadership is essential.
