Generative AI

Artificial Intelligence (AI) is likely to have an increasing impact on education systems over the coming years. Te Tāhuhu o Te Mātauranga is developing policies and advice about this to ensure the system is well prepared. This page provides guidance to support teachers and school leaders with these new tools.

What is generative AI?

Generative AI covers a range of tools that can create new content, having been trained on very large datasets. Tools like ChatGPT and Google Bard are examples of generative AI built on Large Language Models (LLMs). These tools can answer questions, complete written tasks, and interact through language in a human-like way. Other generative AI tools can produce audio, video, visual, or code outputs. Different types of AI tools are already widely used to do things like filter spam emails, make recommendations on websites, or power navigation apps. The common feature is using data to make predictions.

AI tools have become a major topic in education, technology and wider society over the last year. LLMs, including the widely known generative AI tools such as ChatGPT, are made by technology companies using complex mathematical frameworks. LLMs generate text responses that can be very plausible, and due to the intricacies of their design, even identical prompts can yield different outputs.

The landscape of LLMs is diverse and expanding, with options such as Chatsonic, Google Bard, and BingAI available for free public use. Companies are also integrating LLMs into everyday software tools, including search engines and text editors, enhancing their functionality with AI capabilities.

As people learn how these tools can be used in real-life contexts, evidence is emerging about how they can help with various tasks at work and in everyday life. The best way to learn is to take the time to try these tools yourself, but before you do, it's a good idea to know how to use them safely and within the rules.

Using generative AI in kura and schools

Some educators are already using AI tools effectively and safely to support teaching and learning, and we will share examples of these as we learn more. Some examples and advice are linked below.

Schools should consider whether existing policies around privacy, digital technology use, and data and information security are adequate for these new tools, or whether to develop bespoke policies for AI tools.

PLD providers and tertiary institutions are offering short courses and learning opportunities for teachers about genAI tools, both general introductions and how to use them for specific learning areas or approaches.

An introduction to the role of AI in classrooms and schools – Education Hub(external link)

ChatGPT resources – Kit Willet (Selwyn College teacher)(external link)

Practical AI for teachers and students(external link)

Considering the issues below will help keep you and your ākonga safe as you develop expertise with these exciting new tools.

So how might we use AIs like ChatGPT in schools?

As we have seen, there are opportunities and risks in harnessing AI like ChatGPT in the classroom.

Through deliberate and discerning use by teachers, there are opportunities to teach learners critical literacy, particularly supporting ākonga to question the accuracy of what they read and to recognise bias. The curriculum is clear about the importance of critical literacy learning, including being literate in a digital space.

Although children and young people may not be able to access ChatGPT themselves, teachers can generate texts and use these with ākonga to support the development of their critical literacy skills. Teachers could also use a series of texts to support understanding of the effective use of prompts.

Many workers across industries are finding that AI tools can work like a personal assistant to support or speed up various tasks. For teachers, AI tools can help with things like making quizzes or learning tasks. The key caveat: verify that the content is accurate before using it.

Be aware before you use

AI tools provide great opportunities for teaching and learning, but educators need to be aware of the limitations and risks before they jump in.

These tools are still experimental, and their risks and potential harms are not yet well understood. New Zealand’s Office of the Privacy Commissioner has released expectations for people and organisations using generative AI, which principals and teachers should be mindful of.

Generative artificial intelligence – Office of the Privacy Commissioner(external link)

It is worth being aware of the guidance to the New Zealand public service and considering it when making decisions about genAI in your school.

Understanding the benefits of GenAI to the public service – NZ Digital(external link)

As noted above, there are concerns about genAI and intellectual property. The onus is on you to ensure any generated content you use does not infringe on others’ intellectual property or copyright law.

It is also important to have permission from an appropriate person before using AI tools in your work. The Office of the Privacy Commissioner recommends that senior leaders of organisations approve all use of AI tools.

Lack of knowledge of non-dominant languages and contexts

Generative AIs like ChatGPT have been trained on content that can be freely accessed on the internet, and most of this content reflects contemporary, dominant cultures and languages.

This means the tools may not be able to provide results that reflect indigenous knowledge, and in the context of Aotearoa New Zealand, will likely be weak on Mātauranga and Te Reo Māori.

The tool may produce results with errors and biases which can reinforce existing prejudices and inequities.   

Speaking my indigenous language with new AI – Te Ao Māori news(external link)  

Māori culture and language observations with ChatGPT – Dr Karaitiana Taiuru PhD, JP, ACG, MInstD, RSNZ(external link) 

Age restrictions on use

As with any digital tool, teachers must abide by the terms and conditions of the AI tools they want to use with their learners.

For instance, to use ChatGPT you must be at least 13 years old, and if you are under 18 you must have your parent or legal guardian’s permission. The terms of use for ChatGPT and any services using its API are clear that you must not send it any personal information of anyone under the age of 13.

Requiring ākonga to use ChatGPT or another AI tool must be consistent with the terms and conditions of the tool.  

Terms of use – OpenAI(external link)

Avoid using personal data in genAI tools

The safest option is not to input or use personal data in genAI tools. The Office of the Privacy Commissioner advises that thinking about privacy is essential to using AI tools well. It also advises that:

The Privacy Act 2020 applies whenever you collect, use, or share personal information. As a rough guide, if you can say who information is about, it is personal information. That includes information like a name, address, contact details, or photographs of a person.

There is a risk that personal information entered into genAI tools is retained or disclosed by the genAI provider. The responsibility for complying with the Privacy Act 2020 lies with schools.

Large language models are unreliable

Large language models compile text convincingly, but they are more like a very large, sophisticated autocomplete tool than something 'intelligent' in the usual sense of the word.

They can invent facts and details, because services like ChatGPT are designed to create answers that are grammatically correct compilations of text, based on predicting what word should come next in a sentence.
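The "autocomplete" idea can be illustrated with a toy example. The sketch below (in Python, using a made-up sample sentence) simply counts which word most often follows another and predicts on that basis. Real LLMs use neural networks trained on vast datasets, but the underlying task, predicting a likely next word rather than looking up facts, is the same.

```python
# A toy illustration of next-word prediction: count which word follows
# which in a small sample text, then always pick the most common follower.
from collections import Counter, defaultdict

sample = "the cat sat on the mat the cat ate the fish"
words = sample.split()

# Tally, for each word, how often each other word appears right after it.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the sample."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - the most common follower
```

Note that the prediction is driven entirely by frequency in the training text, not by whether the resulting sentence is true; this is why LLM outputs can sound plausible while being wrong.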

OpenAI, the developers of ChatGPT, caution that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers and that fixing this issue is challenging. 

This means that all important outputs from Large Language Models should be verified by a person who knows about the content and will be able to recognise factual or interpretative errors.

Introducing ChatGPT – OpenAI(external link) 

An introduction to the role of AI in classrooms and schools – Education Hub(external link)

Don’t use AI to make decisions about a learner’s work

Using ChatGPT or similar tools for marking work can look promising, but teachers should be cautious of this.

Without understanding the basis for the judgements (i.e. seeing inside the algorithm), this can be unfair and discriminatory. It is the responsibility of teachers and the school to make final decisions on learners’ work. Also, because of the way these tools ‘invent’ answers, sometimes they can be simply wrong.

AI systems trained on the internet have not seen enough work by children or young people to have a good understanding of what is appropriate for or expected of young people.

Can ChatGPT provide feedback? – No More Marking(external link)

NCEA and ChatGPT

NZQA have provided advice specifically about reducing the risk of cheating in NCEA using ChatGPT or other AI tools.

Chat Generative Pre-trained Transformer (ChatGPT) – NZQA newsletter February 2023(external link)

Using AI checkers for assessments

As noted above, teachers should not rely on AI tools to make decisions about student work.

There are many AI detecting or checking tools available, some of which are marketed heavily to teachers. These may have some value, but anyone using them needs to be aware of the risks and caveats.

AI plagiarism checkers are not foolproof. AI detection tools will struggle to keep up with the ability of AI to draft text in the style of a student. They can also generate false positives, meaning they identify something as AI-generated when it is not.

False positives and true positives cannot be told apart without using a different method, such as a discussion with the student. This means you should not rely on AI-based detection tools to judge whether a student has breached assessment conditions.
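A worked example with hypothetical numbers shows why even a low false-positive rate is a problem. The class size and rates below are illustrative assumptions, not figures for any real detection tool:

```python
# Illustrative arithmetic (all rates are hypothetical): a seemingly low
# false-positive rate still flags honest students, and a flag alone
# cannot tell you which group a student belongs to.
class_size = 120
used_ai = 10                       # students who actually used AI (assumed)
detection_rate = 0.80              # share of AI use correctly flagged
false_positive_rate = 0.05         # share of honest work incorrectly flagged

true_flags = used_ai * detection_rate                       # 8.0 students
false_flags = (class_size - used_ai) * false_positive_rate  # 5.5 students

share_wrong = false_flags / (true_flags + false_flags)
print(f"{share_wrong:.0%} of flagged students did nothing wrong")
```

With these numbers, roughly two in every five flags point at honest work, which is why a flag should only ever prompt a conversation, never a conclusion.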

Teachers are best placed to make judgements about the authenticity of ākonga work and should employ a range of strategies to support such judgements. School assessment policies should be clear about these strategies and the processes to follow if a teacher suspects unauthorised use of generative AI.

Understanding false positives within our AI writing detection capabilities – Turnitin(external link)

A Hitch in Accurate Detection of AI-Written Content – Psychology Today NZ(external link)

Turnitin says its AI cheating detector isn’t always reliable – The Washington Post(external link)

