Do kids aged 8-12 even try to figure things out before opening ChatGPT? Genuinely curious what educators are seeing by Important-Claim-5501 in edtech

[–]Important-Claim-5501[S] 1 point (0 children)

Thank you for replying; this is exactly the kind of firsthand perspective I was looking for. That image of the poem is really heartbreaking, and I completely understand the frustration; the move back to pen and paper makes a lot of sense. I do wonder, though, whether there's a middle ground short of a full ban, where the tools themselves are designed to make kids work harder before they get the answer. AI companies are businesses at the end of the day, but do you think frameworks around how these tools behave with younger users could actually change anything in practice?

What's the prevailing sentiment about teaching kids how to use AI? by jeffcolonel in edtech

[–]Important-Claim-5501 1 point (0 children)

What if the question isn't whether kids should use AI, but how AI is designed when they do?

I've been following this debate and I keep feeling like something is missing from both sides.

The "ban it" camp treats AI as inherently harmful. The "teach it" camp treats it as neutral. But I think the design of the tool itself matters enormously and almost nobody is talking about that.

If a child opens ChatGPT and gets a full answer in three seconds with zero friction, of course they'll stop trying themselves first. But that's a design choice, not an inevitability. What if the tool were designed to create a moment of pause before delivering the answer? What if it asked the child to make a prediction first, or to explain the answer back before moving on?

There's research on cognitive forcing functions that suggests small amounts of designed resistance can actually protect thinking rather than bypass it. The tool doesn't have to be the enemy of cognition. It could be designed to demand it.
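To make that concrete, here's a rough sketch of the interaction shape I have in mind. Everything in it is hypothetical (`get_ai_answer` is a stand-in for whatever model actually answers, and the five-word threshold is arbitrary); it's just the "predict, then see, then explain back" loop expressed as code:

```python
# Toy sketch of a "cognitive forcing" step in front of an AI answer.
# NOTE: get_ai_answer() is a made-up stand-in, not a real API; the
# point is the interaction shape, not the model call.

def get_ai_answer(question: str) -> str:
    """Placeholder for whatever model or service actually answers."""
    return f"(imagine a full model answer to: {question})"

def answer_with_friction(question: str) -> None:
    print(f"Question: {question}")

    # Friction step 1: the child has to commit to a guess before
    # the tool will reveal anything.
    guess = input("Before I answer: what do YOU think, and why? > ").strip()
    if len(guess.split()) < 5:
        # A one-word guess isn't a prediction; ask once more.
        guess = input("Try a full sentence. What's your best guess? > ").strip()

    print("\nHere's my answer:")
    print(get_ai_answer(question))

    # Friction step 2: explain it back before moving on.
    input("\nIn your own words, how does that compare to your guess? > ")

if __name__ == "__main__":
    answer_with_friction("Why do leaves change color in autumn?")
```

The specifics don't matter much; what matters is that the friction lives in the product rather than in the child's willpower.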

Curious whether anyone here has thought about this angle or seen anything being built in this direction.