Universities need more than off-the-shelf AI solutions


This post was originally published on the LSE Impact Blog.

Excerpt

Following the release of ChatGPT Edu, OpenAI’s enterprise offer to universities, Daniela Duca assesses the landscape of AI adoption in higher education and the different and emerging AI options ava…

Last week, OpenAI unveiled ChatGPT Edu, an enterprise offering developed in collaboration with institutions like Arizona State University, marking its first foray into sector-specific solutions. However, the increased token limits and enhanced security, while appealing, may not be enough to significantly accelerate the adoption of generative AI across higher education.

At present most universities likely fall into one of three categories:

Innovators

These trailblazers have already partnered with OpenAI or Microsoft, or have developed their own custom solutions using open-source models. They include Arizona State University, the University of Michigan, the University of Tennessee Knoxville, UC Irvine, Harvard University, Northwestern University, Washington University, LSE and probably a few others.

Stewards

This group has been actively developing comprehensive guidelines as they explore institution-wide adoption. They’re utilising tools like Microsoft Copilot for administrative tasks and may be about to roll out solutions specifically for teaching and learning or research. Many universities, including Delaware, UT Austin, Cornell, Carnegie Mellon, and several Canadian institutions like Queen’s, Ottawa, Wilfrid Laurier, McMaster, Toronto Met, and Western, are in this category. The Russell Group institutions in the UK developed a series of principles, providing extensive guidance and resources on AI across the consortium.

The hmm… crew

Finally, many medium- and small-sized institutions might still be pondering their options or simply waiting. The Educause AI landscape report reveals that a significant 11% of responding institutions haven't even begun strategising about AI integration.

So, which genAI flavour will campuses choose?

With ChatGPT Edu, there are now three major enterprise off-the-shelf options: Microsoft Copilot, Google Gemini, and OpenAI’s offering. The challenge for larger institutions lies in enabling seamless access across the entire campus. Microsoft and Google have made this relatively easy, but ChatGPT still presents a hurdle, potentially becoming yet another app to manage. However, its sheer brand recognition might prevent it from getting lost in the shuffle.


A more intriguing option, in my opinion, lies in universities leveraging their research prowess and deep expertise in teaching and learning to build custom solutions. Open source models are becoming increasingly sophisticated and readily available through various cloud platforms. Two developments make this a tempting prospect for institutions with the requisite expertise. The first is the simplified building of retrieval-augmented generation (RAG) solutions, which allow Large Language Models (LLMs) to access specific information in real time when answering user queries, and which can be used to create research assistants powered by an institution's own data. The second is advancements in agentic systems, which offer an alternative architecture for generative AI assistants, composed of multiple LLM-powered agents.
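To make the RAG idea concrete, here is a minimal, illustrative sketch: retrieve the most relevant document for a query, then pass it to a language model as grounding context. Both pieces are assumptions for illustration only — the retriever is a toy keyword-overlap scorer (real systems use vector embeddings), and `generate()` is a stub standing in for a call to whatever hosted or open-source model an institution deploys.

```python
# Illustrative RAG sketch. retrieve() and generate() are toy stand-ins,
# not any real library's API.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query.
    A production system would rank by embedding similarity instead."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def generate(prompt: str) -> str:
    """Stub for an LLM call; a real system would query a hosted model."""
    return f"[model answer grounded in: {prompt}]"

def answer(query: str, documents: list[str]) -> str:
    """Augment the query with retrieved institutional context, then generate."""
    context = retrieve(query, documents)
    return generate(f"Context: {context}\nQuestion: {query}")

# Hypothetical institutional documents standing in for a university's own data.
docs = [
    "The library opens at 9am on weekdays.",
    "Tuition fees are due at the start of each semester.",
]
print(answer("When does the library open?", docs))
```

The design point is that the model never needs to be retrained on institutional data; the data is injected at query time, which is what makes RAG comparatively cheap for universities to build and keep up to date.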

An open source approach aligns well with the ethos of universities and potentially offers more effective and tailored tools. However, it’s undeniably a massive undertaking for most institutions. One promising path forward involves consortia of universities collaborating on such tools and services, potentially led by the Innovators or established sector leaders, who have already built custom solutions. Australia, for example, has successfully implemented a similar system for research and data services, while the UK has led the way with shared cloud infrastructure. The recently announced Global Data Consortium also aims to bring universities together around the use of AI on student data. This collaborative model could help distribute costs and expertise, making this ambitious endeavor more feasible for a wider range of institutions.


However, it’s not simply security concerns and technical integration hindering enterprise adoption of generative AI in higher education. It’s also about familiarity. We haven’t quite figured out the best way to use genAI at the institutional level, given its inherent limitations.

What’s the point of genAI for universities?

A recent poll conducted on Sage’s Research Methods Community (Fig.1) reveals that 57% of social science researchers and faculty have not yet used generative AI, while another 13% are interested but haven’t taken the plunge. Only 12% report extensive use. This mirrors findings from other institutions: at the University of Baltimore, faculty are primarily experimenting out of curiosity, and at the Michigan Institute for Data Science, 70% of 92 faculty lack practical experience. Conversely, a survey of 6,311 German students shows that 63.4% have used AI tools for academic purposes.

Fig.1: Sage Research Methods Community poll. In response to the question "In the past year have you used any AI-powered tools or platforms to teach research methods?", 57% answered no.

While students may be embracing generative AI, the majority of staff and faculty remain hesitant. For some, it’s merely a flawed search engine producing unreliable results, while for others, it’s a tool for academic dishonesty. Neither perception is particularly helpful. To navigate this complex landscape, we need better analogies to guide our understanding and use of generative AI.


Finding these ‘use cases’ is precisely the goal of many EdTech companies. Over 100 EdTechs already incorporate generative AI models for audio, video, or text, and the past two decades have seen $22 billion invested in AI in education, with $100 million specifically for higher education tools. It’s inspiring to see the innovative solutions emerging, like Sherpa Labs and Anywyse, which aim to boost student engagement. However, they face the same challenge of unfamiliarity and, crucially, need to demonstrate a tangible impact on learning outcomes.

Early research offers mixed results. Nearly 100 educational papers have been published on ChatGPT alone, with a Stanford study suggesting AI chatbots haven’t increased overall cheating rates in schools. Another study confirms the benefits of instructors using GPT-3 for positive feedback but highlights its limitations in providing constructive feedback for struggling students.

In courseware development, Acrobatiq’s AI-generated content has shown promise, but a lack of comparable historical student data makes it difficult to definitively quantify its impact. Similarly, the literature review landscape is evolving, with research databases adding retrieval-augmented generation, but the effectiveness of this approach remains questionable.

The potential of generative AI in education is tantalising, but current research still offers mixed results. We lack longitudinal studies, and the rapid pace of technological change renders many findings out of date. To truly understand its impact, we need more in-classroom experiments. EdTech tools are uniquely positioned to gather this crucial data, but collaboration between researchers and EdTech companies is key.

A silver lining?

Education non-profits are leading AI adoption, and higher education institutions are gaining trust for responsible AI use. Collaborative initiatives like CANGARU, institutional AI initiatives, and reports from the Coalition for Networked Information, the Association of Research Libraries, and the Ontario Council of University Libraries highlight proactive efforts. Ithaka S+R’s 19-university task force and EDUCAUSE’s collaboration with AWS further demonstrate the sector’s commitment to understanding and responsibly integrating AI.

While these collaborative efforts might initially seem slower, they will ultimately speed up adoption and reduce resistance. By prioritising effectiveness and value as defined by higher education institutions themselves, they can distribute costs and leverage expertise across the sector, avoiding duplication and saving money. Crucially, this approach will benefit the entire sector, supporting the necessary changes to existing academic processes, such as assessments, that must evolve in response to generative AI.

Current enterprise AI offers like Microsoft’s Copilot or ChatGPT Edu therefore still need deeper integration within core academic processes. Ultimately, the widespread adoption of generative AI in higher education hinges on the sector’s ability to demonstrate the technology’s true value and tangible efficiencies, thus making it an indispensable part of its infrastructure. Whilst we shouldn’t all sign up for the hmm… crew, the future of AI in higher education is by no means decided.