
Ai + the Student


… our vision for students in an Ai age



Introduction

In November 2022, ChatGPT hit schools like an asteroid from space. Students latched on to its potential to do their homework and assessments, and started a cold war with teachers trying to block its use. But trying to keep artificial intelligence (Ai) out of education was like King Canute trying to hold back the tide: futile, and ultimately unhelpful.


Ai’s arrival is a consequential moment for education, presenting challenges and opportunities that schools need a step-change to navigate and leverage successfully.


This article is the first in a series that delves into the potential of Ai to shape the future of education. It explores possible goals for rangatahi (young people) in an Ai age – to value original learning, to put learners at the centre of education, to learn how to leverage Ai positively, and to identify and mitigate the misuse of Ai – and some of the positive ways students can use Ai in their learning.


Subsequent articles explore how Ai will affect other levels in education – teachers, schools, and the wider system – and how they can collaborate to support innovation in the use of Ai in education.


The asteroid has landed.  It’s time to reimagine education in the Ai age.

 

What is Artificial Intelligence?

Ai refers to the simulation of human intelligence and behaviours by computers. While the underlying concept has historical roots, such as Mary Shelley's "Frankenstein," the term 'artificial intelligence' was first used in 1956.


Development has accelerated in the last decade, driven by the exponential growth in data availability and processing capabilities, giving rise to advanced Ai software like OpenAI's ChatGPT and Anthropic's Claude.


These are examples of generative Ai: sophisticated computer programmes capable of generating their own content and performing a wide variety of tasks, such as writing text, analysing data, summarising text, writing computer code, creating images or video, and conversing with people.


Behind the new Ai are large language models (LLMs), computer programmes that mimic the human brain’s ability to learn from data and make decisions. They use massive computing power to process vast datasets, such as the whole of Wikipedia, to recognise patterns and build operational rules, such as how to construct sentences, interpret text, or draw images and videos.


The rules are refined through training, which corrects mistakes and adds extra layers of complexity, such as understanding the meaning or context of a question. The Ai uses these rules to generate words or images that simulate human understanding and creativity, similar to how film and video use still pictures to simulate motion. Is Ai intelligent or sentient? Not yet.
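To make ‘recognise patterns, then generate’ concrete, here is a deliberately tiny Python sketch. It is a toy illustration only, with a made-up corpus: it counts which word follows which, then samples continuations from those counts. Real LLMs use neural networks trained on vast datasets, but the core loop of learning statistical patterns and then predicting the next word is the same idea.

```python
from collections import defaultdict, Counter
import random

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate text by repeatedly sampling a likely next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=5):
    """Generate text by sampling each next word from learned counts."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # no known continuation, stop
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Training a real model is, in effect, this counting-and-correcting loop scaled up billions of times, with the ‘rules’ stored as neural network weights rather than a simple lookup table.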


Public awareness of the power of generative Ai exploded in 2022 when OpenAI released ChatGPT. It was the first widely accessible generative Ai based on LLMs, and its breadth of knowledge and comparative sophistication surprised many.

In education, it was quickly adopted by students to outsource their learning work, i.e. homework and assessments. Why write an essay when ChatGPT can do it for you, and possibly write it better? Teachers responded by trying to ban it, or by trying to detect its use, whether manually, with ChatGPT itself, or with software such as Turnitin, with varying success.


Since then, Ai development has accelerated.  Ai is now part of education’s fabric.  It’s advancing rapidly and its impact will grow, as it is in other industries.


Considering this, we must be clear about what we want for students in an Ai age. This article proposes four goals:


1. Valuing original learning

2. Learner centricity

3. Learning to leverage Ai positively

4. Learning to identify and mitigate misuse and risks of Ai.

 

Valuing Original Learning

Firstly, we want rangatahi to understand and value their own learning, i.e. we want them to appreciate the importance of original knowledge, thinking, creativity, and effort.


The concept of original knowledge is becoming increasingly significant in our era of information overload and misinformation.


A recent survey by the Canada Foundation for Innovation revealed a concerning trend: about 41% of young Canadians aged 18-25 displayed science hesitancy, preferring to trust social media sources over traditional ones. It’s crucial for young learners to not only trust but also critically engage with original knowledge. They should be encouraged to question and expand existing knowledge, rather than succumb to cybercynicism that rejects any form of absolute or objective truth.


We want students to value their own, original thinking and creativity. Currently, most students view outsourcing their work as cheating. However, with the growing pervasiveness of Ai, this perception may shift, possibly normalising the use of Ai in academic work. Traditional responses like "you're only cheating yourself" are becoming insufficient. It’s increasingly important to encourage students to value their own thinking and creativity, rather than defaulting to Ai solutions. This involves creating an environment where original thought is encouraged and incentivised.

Along with this, we want students to value original effort.


In her book "Dopamine Nation," Stanford University's Anna Lembke discusses the interplay between effort and the pleasure neurotransmitter dopamine. Dopamine enables us to feel pleasure and has been essential in motivating humans to perform crucial tasks. However, Lembke suggests that effort is also necessary for dopamine to be effective, as pleasure and effort require each other.


With Ai, students can complete tasks with minimal effort and become cyberslackers. This affects their learning and their mental wellbeing, so we must design learning activities and assessments that incentivise and support students’ original effort in their learning. 

 

Learner Centricity

It’s estimated that the global education technology market will be worth approximately $700 billion by 2030, growing by 14% annually. Global players such as Squirrel Ai, BYJU'S, Khan Academy, and Duolingo are pivoting to Ai and seeking prompt returns.


Development, therefore, has focussed on low-hanging fruit such as maths or exam preparation, via sophisticated variations of rote learning. But great teachers go beyond their subject content to develop students’ broader ability to think and understand. Education must go beyond mere knowledge transfer to also develop broader ‘student-centric’ skills such as communication, social, and critical-thinking skills.


This will be challenging with a still-nascent technology. Khanmigo, Khan Academy’s Ai, is designed to work out answers to questions itself and then compare these to the student’s answer, marking it and giving feedback and feedforward. This is clever and works well for teaching maths, languages, or coding. But would it be able to debate how many angels can dance on a pinhead in a philosophy class?
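As a rough illustration of this solve-then-compare pattern, here is a minimal Python sketch using OpenAI’s chat completions API. It is a sketch of the general idea only, not Khanmigo’s actual implementation; the model name, prompts, and example question are all assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def mark_answer(question: str, student_answer: str) -> str:
    """Solve-then-compare: the model first works the problem itself,
    then marks the student's answer against its own working."""
    # Step 1: the model solves the problem independently.
    worked = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": f"Solve step by step: {question}"}],
    ).choices[0].message.content

    # Step 2: compare the student's answer to the model's own working,
    # returning feedback (what went wrong) and feedforward (next steps).
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Question: {question}\n"
            f"Reference working: {worked}\n"
            f"Student answer: {student_answer}\n"
            "Mark the student's answer against the reference working. "
            "Give feedback on any errors and feedforward on what to try next."
        )}],
    ).choices[0].message.content

print(mark_answer("Differentiate y = 3x^2 + 2x", "dy/dx = 6x + 2"))
```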


Current Ai is ‘unimodal’, i.e. it focuses on one learning outcome: the transfer of knowledge. We want it to be ‘multimodal’, able to achieve multiple learning outcomes – creative writing, design, painting, evaluation, critical thinking, social justice, cultural perspectives. Building on the Khanmigo approach, which is smart, we need Ai that can imagine multiple different possible answers and perspectives, some unrealistic, to provoke and extend students’ thinking and creativity.
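Extending the sketch above, here is a hedged example of what ‘multiple perspectives’ prompting might look like. Again, the system prompt, model name, and question are illustrative assumptions, not a description of any existing product.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Instead of one "correct" answer, ask for contrasting perspectives,
# including a clearly labelled speculative one, each ending with a
# question designed to extend the student's thinking.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": (
            "You are a teaching assistant. Offer three contrasting "
            "perspectives on the student's question: one mainstream, "
            "one dissenting, and one deliberately speculative, clearly "
            "labelled as such. End each perspective with a question "
            "that extends the student's thinking."
        )},
        {"role": "user", "content": "Should monopolies always be broken up?"},
    ],
)

print(response.choices[0].message.content)
```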


An early criticism of ChatGPT was its inaccuracy, or ‘hallucination’: it made up facts and citations. But maybe, as Sam Altman suggested in an interview on the Hard Fork podcast, we do want Ai to be able to make things up, to imagine, but to know when it’s doing so and why. Then it could, safely and transparently, extend students’ thinking.


In Year 13 economics, for example, students study two opposing market structures – perfect competition and monopoly. In reality, neither exists in a pure form, but studying the theoretical helps students understand the real.


We want human teachers who go beyond the immediate content to get students to consider ‘what if … why … or would it be possible?’  If Ai is to teach, we want it to do the same.

 

Leverage Ai Positively

Thirdly, we want students to learn how to use Ai positively and appropriately. 


Students first used Ai to outsource their learning work, i.e. homework and assessments. A natural response from teachers was to try to prevent this, in the belief that students stop learning when they use Ai. However, trying to catch students using Ai is time-consuming, especially if students put Ai-generated text through paraphrasing tools such as QuillBot.


Students live in an Ai age and will graduate into an Ai-rich work environment. We do them a disservice if we pretend that Ai doesn’t or shouldn’t exist. Instead, we must teach them to use Ai positively; there are multiple opportunities to do so if we’re prepared to think creatively, such as those listed in Table 1.

Learning to use Ai positively also includes recognising its potential biases and limitations, and identifying situations where Ai use may not be appropriate, such as social contexts where human empathy and understanding are paramount.


Teaching students to use Ai positively and selectively requires more than leaving them to use it whenever they want, without guidance. It requires structured support that is well communicated and incentivised, an issue explored further in the next article – Ai + The Teacher.

 

Identify and Mitigate Misuse of Ai

Fourthly, we want students to identify the misuse and risks of Ai and learn how to mitigate them.  Rangatahi are growing up in a crazy, complex world that holds risks for them at and beyond school.


Propaganda, fraud, and theft aren’t new, but Ai has made them harder to detect and combat. An example is ‘deep fakes’: Ai-generated images or audio so realistic that the average person struggles to distinguish them from real content.


One website, Character.Ai, uses this technology to bring famous people such as Elon Musk or Adolf Hitler ‘alive’ online as digital Ai avatars that users can chat to. Political parties are also using it in their campaigns: the NZ National Party recently used Ai-generated people in its campaign ads, and the US Republican Party used Ai to create highly realistic but fake ads to scare voters out of supporting Joe Biden.


While some of these instances might seem benign, there’s potential for more harmful uses, such as websites promoting extremist views. Imagine a website suggesting Hitler is alive in South America and wants to talk with students about the merits of the Final Solution and Nazism. It’s vital for students to learn strategies to identify and challenge Ai misuse.


Another example of Ai misuse is disinformation and misinformation. A recent study by Imperva found that 47% of internet traffic consists of automated bots, two-thirds of which are bad bots deliberately spreading false information. The US State Department determined that the Russian troll farm ‘Internet Research Agency’ created thousands of social media accounts during the 2016 US presidential election, purporting to be Americans and promoting events to support Trump and oppose Clinton. We need to teach students the skills to identify and question such content, and how to access trusted content.


Data promiscuity is also growing. Today’s youth show increased awareness of data sharing and privacy, but the security challenges are multiplied by Ai. Instances like the Cambridge Analytica scandal, where personal data was used without consent, demonstrate the risks. In New Zealand, the Disinformation Project found that in the recent general election, Labour, National, ACT, and the Green Party tracked visitors to their websites, recording their online activity without their knowledge or consent. More than ever, students need to know how to protect their data and privacy.


Another example of misuse is scamming, which cost Kiwis over $20 million in 2022 and is expected to grow worse. We need to upskill our rangatahi, as well as build and legislate for better online security.


Alongside teaching students how to use Ai positively, we need to educate them about the potential for its misuse and how to navigate these challenges, ensuring their safety and well-being in an Ai age.

 

Conclusion - Goals for Education in an Ai Age

8 AD – Pygmalion by Ovid. 1818 – Frankenstein by Shelley. 1940 – Robbie by Isaac Asimov. 1950 – the Turing Test. 1956 – ‘artificial intelligence’ coined. Ai isn’t new as an idea, but its realisation through ChatGPT in 2022 was a game changer in education. Students latched on to it and began a cold war with teachers trying to block it.


However, the way forward isn’t to build bigger walls or to ignore Ai. If teachers and schools step back, Ai’s development will be left to other interests, such as commercial vendors.


Teachers must embrace Ai and lead its innovation, adapting curriculum and pedagogy accordingly, with support from school leaders, the wider system, and other stakeholders in a ‘connected innovation’ framework that is elaborated in subsequent articles.


Education, especially public education, needs to pivot and adapt quickly to Ai. Let’s connect and support our teachers to innovate.

 

 

Ko ia kāhore nei i rapu, tē kitea

Those who don't seek, won't find.

 

Whāia te mātauranga hei oranga mō koutou.

Seek after learning for the sake of your wellbeing.
