Your Learners’ Data Doesn’t Belong in ChatGPT

I recently heard a story about a teacher who’d started using ChatGPT to write her learner reports. She’d paste in the children’s names, marks, attendance records, and behavioural notes, and the AI would generate beautifully written, personalised report comments in seconds.

She was thrilled. I was impressed, and horrified!

Not because AI is bad. It’s genuinely remarkable, and it’s going to transform education in ways we’re only beginning to imagine. But there’s a difference between using AI smartly and handing a child’s personal information to a tech company on the other side of the world without a second thought.

And right now, that’s exactly what’s happening in schools across South Africa.

What actually happens when you paste data into ChatGPT:

Here’s what most people don’t realise. When you type something into ChatGPT, Claude, Gemini, or any other AI tool, that data doesn’t just disappear after you get your answer. It gets sent to servers, usually in the United States, owned by the company that built the tool.

Depending on the tool and the plan you’re using, that data might be stored. It might be reviewed by the company’s staff. And in some cases, it could be used to train the next version of the AI model. That means a learner’s name, their marks, their disciplinary record, or their home situation could become part of a dataset that millions of people interact with.

For a casual question, such as “help me write a recipe” or “explain photosynthesis”, this doesn’t matter much. But the moment you start pasting in children’s personal information, the stakes change completely.

Children’s data is not ordinary data. South Africa’s Protection of Personal Information Act (POPIA) treats children’s personal information as a special category that requires extra care. Schools are responsible for protecting it. Not just from hackers and data breaches, but also from being shared with third parties without proper safeguards in place.

When a teacher copies a class list into ChatGPT, they are transferring that data to a company in the United States. There’s no contract between the school and OpenAI governing how that data will be handled. There’s no guarantee it won’t be stored or used for other purposes. And the school almost certainly hasn’t obtained consent from parents for this specific use of their children’s information.

Under POPIA, that’s a problem.

And if you think this is just a South African concern, it’s not. The EU’s General Data Protection Regulation (GDPR) has even stricter rules around children’s data, and the new EU AI Act specifically classifies AI used in education as “high-risk,” meaning it comes with a whole set of additional obligations. Globally, regulators are paying very close attention to how AI interacts with children’s information.

“But I’m just trying to save time.”

I get it. Teachers are stretched impossibly thin. Report writing takes hours. AI can do in seconds what used to take an entire weekend. The temptation is real, and the frustration behind it is completely valid.

The issue isn’t that teachers want to use AI. The issue is that most teachers haven’t been told what’s safe and what isn’t. Nobody sat them down and said: “Here’s how to get the benefit of AI without putting your learners’ privacy at risk.”

That’s not the teacher’s fault. That’s a leadership gap.

What schools can do right now:

The good news is that you don’t need to ban AI or pretend it doesn’t exist. You just need some sensible guardrails. Here are a few practical steps any school can take today:

Know what’s being used.

Start by simply asking your staff: “Are you using any AI tools in your work?” You might be surprised by the answers. You can’t manage what you don’t know about.

Set a clear policy.

It doesn’t need to be complicated. A simple rule like “no learner names or personal details in AI tools” gives teachers a boundary while still allowing them to use AI for lesson planning, general content creation, and other non-sensitive tasks.
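
If your school has someone comfortable with a little scripting, the spirit of such a rule can even be checked automatically before anything gets pasted. Below is a minimal sketch in Python; the learner names and the 13-digit South African ID number pattern are illustrative assumptions, not a complete personal-information detector.

```python
import re

# Illustrative only: a real school would load its class lists from
# its own records, and this is far from a complete PII check.
learner_names = ["Thandi Nkosi", "Pieter Botha"]

# South African ID numbers are 13 digits long.
sa_id_pattern = re.compile(r"\b\d{13}\b")

def flag_personal_details(text: str) -> list[str]:
    """Return reasons why this text should not be pasted into an AI tool."""
    flags = []
    for name in learner_names:
        if name.lower() in text.lower():
            flags.append(f"contains learner name: {name}")
    if sa_id_pattern.search(text):
        flags.append("contains a 13-digit number that looks like an SA ID")
    return flags

draft = "Thandi Nkosi scored 78% in Maths."
print(flag_personal_details(draft))
# ['contains learner name: Thandi Nkosi']
```

Even as a rough filter, a check like this turns the policy from a poster on the wall into a habit at the keyboard.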

Anonymise before you paste.

If a teacher wants AI help with report comments, they can remove names, ID numbers, and any identifying details first. “Learner A scored 78% in Maths and shows strong problem-solving skills” gives the AI enough to work with without exposing anyone’s identity.
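
For anyone who wants to automate that step, here is a minimal Python sketch of the same idea, using made-up learner records. The mapping from placeholder back to real name stays on the teacher’s own machine; only the anonymised text would ever be pasted into an AI tool.

```python
# Made-up records for illustration; a real school would pull these
# from its own marks file or management system.
records = [
    {"name": "Thandi Nkosi", "subject": "Maths", "mark": 78,
     "note": "shows strong problem-solving skills"},
    {"name": "Pieter Botha", "subject": "Maths", "mark": 54,
     "note": "needs more support with fractions"},
]

mapping = {}   # placeholder -> real name; this never leaves your machine
lines = []

for i, rec in enumerate(records):
    placeholder = f"Learner {chr(ord('A') + i)}"   # Learner A, Learner B, ...
    mapping[placeholder] = rec["name"]
    lines.append(f"{placeholder} scored {rec['mark']}% in {rec['subject']} "
                 f"and {rec['note']}.")

anonymised_prompt = "\n".join(lines)
print(anonymised_prompt)   # this is the only text that goes to the AI tool

# When the AI returns draft comments, restore the names locally, e.g.:
# draft = draft.replace("Learner A", mapping["Learner A"])
```

The AI sees patterns and marks, never identities, and the teacher still saves the weekend.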

Use school-approved tools.

Some EdTech providers are building AI features that keep data within controlled environments, where there are proper contracts, data stays in the right jurisdiction, and privacy protections are built in by design. If your school management platform offers AI features, that’s almost always a safer option than a consumer tool like the free version of ChatGPT.

Talk about it.

The worst thing you can do is say nothing. If teachers are already using AI, and many are, silence from leadership means they’ll keep doing it without any guidance. A 10-minute conversation at your next staff meeting could prevent a serious privacy incident.

This isn’t about fear. It’s about being intentional.

AI in education is not a trend that’s going to pass. It’s going to become as normal as using a projector or a spreadsheet. And that’s a good thing, when it’s done thoughtfully.

But right now we’re in the messy middle. The tools are ahead of the policies. The enthusiasm is ahead of the understanding. And children’s data is ending up in places it was never meant to go, not because anyone has bad intentions, but because nobody told them to think twice.

Schools have a duty of care that extends beyond the classroom. It extends to how their learners’ information is handled in every system, every tool, and every AI prompt.

The conversation doesn’t start with technology. It starts with trust. Parents trust schools with their children, every part of them, including their data. Let’s make sure that trust is well placed.

Patric Trollope

Chief Technology Officer

d6
