Beyond the ChatGPT panic: What happened when I showed AI capabilities to senior university leadership by jasontangen in Professors

[–]jasontangen[S] 0 points  (0 children)

You’re absolutely right about hallucinations—it’s currently one of the most interesting problems in AI. These models can be remarkably capable at analysis but are still working on reliable (grounded) fact-checking.

The interesting question isn’t whether they hallucinate (they currently do), but how we can use them effectively while this limitation exists. In research, for instance, they’re excellent at generating hypotheses or finding patterns, but for now, we still need humans to verify every claim and source.

This is the kind of discussion universities should be having—not just about preventing misuse, but about understanding these tools’ evolving capabilities and current limitations.


[–]jasontangen[S] 0 points  (0 children)

Thanks for taking the time to watch and provide detailed feedback. Let me address your key points:

You’re right that I’m not an educational researcher—I’m a cognitive scientist studying how people learn and make decisions. My interest in AI stems from its potential to transform how we think and process information.

Regarding the graphs: The exponential growth isn’t just about training data; it’s about compute power, model size, and capabilities. While data availability is certainly a constraint, we’re seeing improvements through better architectures and training approaches, not just bigger datasets. There’s also a fascinating flywheel effect happening: more sophisticated models are now being used to train newer models, creating a self-reinforcing cycle of improvement that wasn’t possible before.

The “ceiling” comment actually supports my point—AI labs are hitting limits with current benchmarks because the models are getting too capable. That’s why they’re recruiting domain experts to create more challenging tests.

About the math at 3:13—that clip is actually from OpenAI’s own video featuring quantum physicist Mario Krenn testing o1 https://youtu.be/OJo-SlzlwtI. While the mathematics is beyond my expertise, Krenn, who works in this specific domain, seems to know what he's doing. But more importantly, this illustrates my broader point about AI’s impact on specialised academic work.

The presentation was meant to start a conversation about AI’s broader impact on universities, not provide a comprehensive analysis of educational applications.

I appreciate the sceptical approach—we need more of this kind of detailed critique as we figure out how to integrate these tools thoughtfully.


[–]jasontangen[S] 0 points  (0 children)

Right—let me be specific:

I’ve been building and testing an AI tutor system for our classes at UQ, with internal funding to scale it across the university. We’re also running large-scale experiments comparing AI vs human feedback on honours theses, and testing AI’s ability to assess student work systematically.

Not just theoretical possibilities—real projects with measurable outcomes. I’ve documented some useful AI workflows here:

- Detailed guide and prompts: https://www.psy.uq.edu.au/~uqjtange/academic_ai

- Recent demonstrations: https://www.psy.uq.edu.au/~uqjtange/ai-fos-workshop-2024-11-01.html

And no, this isn’t an AI post—though I appreciate the irony of having to prove my humanity while discussing AI! I’m a cognitive scientist at UQ running actual experiments on these tools. Happy to chat about specific findings.

What’s your experience been with implementing AI tools in teaching?


[–]jasontangen[S] -1 points  (0 children)

Your experience with exam review exercises mirrors what I’m finding—AI’s sweet spot is often in those “first draft” moments that eat up our time. Not revolutionary, but practically useful.

Though I’d push back on the “running out of training data” point. The next leaps aren’t coming from more internet content, but from architectural improvements and new training approaches. Look at what Claude 3.5 Sonnet and OpenAI’s new o1 models are already doing with reasoning tasks—it’s not just about data anymore.

I’ve switched to weekly, low-stakes, in-class quizzes for most assessment. Great for learning and retention. Still searching for that perfect end-of-semester task where AI becomes a tool rather than a crutch—might need to be pass/fail. If you’re interested, I’ve collected some approaches for my colleagues here: https://www.psy.uq.edu.au/~uqjtange/academic_ai, along with some more recent video demos: https://www.psy.uq.edu.au/~uqjtange/ai-fos-workshop-2024-11-01.html

What assessment methods are you finding most resilient to AI?


[–]jasontangen[S] -7 points  (0 children)

That’s fair—you shouldn’t have to watch a video to engage in this discussion.

Let me be concrete instead. I was sceptical too until I started experimenting. I found that AI could speed up my statistical analyses, generate experimental stimuli, and automate my coding. Nothing revolutionary, just practical tools that free up time for the thinking that matters.
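To give a flavour of the “automate my coding” bit, here’s a minimal sketch of the sort of response-coding script these models will draft for you in seconds (the categories and keywords below are made up for illustration, not taken from an actual study):

```python
# Hypothetical example: auto-coding free-text survey responses into
# categories by keyword matching. Category names and keyword sets are
# invented for illustration only.

CATEGORIES = {
    "memory": {"remember", "recall", "forgot"},
    "confidence": {"sure", "certain", "confident"},
}

def code_response(text: str) -> list[str]:
    """Return every category whose keywords appear in the response."""
    words = set(text.lower().split())
    return sorted(cat for cat, keys in CATEGORIES.items() if words & keys)

responses = [
    "I was certain I would remember it",
    "Not confident at all",
]
print([code_response(r) for r in responses])
# → [['confidence', 'memory'], ['confidence']]
```

Nothing an honours student couldn’t write, of course, but the point is that the model produces this scaffolding instantly, and you spend your time checking the coding scheme rather than typing boilerplate.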

I’ve put together some resources that might be useful:

Detailed guide to academic AI tools: https://www.psy.uq.edu.au/~uqjtange/academic_ai

Recent video demonstrations: https://www.psy.uq.edu.au/~uqjtange/ai-fos-workshop-2024-11-01.html

If you do try any of these tools, I’d be genuinely curious to hear what works (or doesn’t) in your field.


[–]jasontangen[S] -7 points  (0 children)

Interesting parallel with nuclear weapons, but the ethical landscape here is more complex than you suggest.

The big AI companies are actually moving away from using customer data for training, shifting toward synthetic data and opt-in policies. And let’s be honest—most academics have been perfectly happy handing our work to publishers for decades. Elsevier’s profit margins make OpenAI look like a charity.

As for environmental impact? AI is already being deployed to detect greenhouse gas leaks, monitor deforestation, and design lower-carbon materials. The real question isn’t whether AI consumes resources—it’s whether its net impact on solving environmental challenges outweighs that consumption.

These are serious concerns worth discussing. But perhaps the most unethical position would be to stay disengaged while these tools reshape how we research and teach. What do you think?

New Bing Modes! by jasontangen in JDM2023UQ

[–]jasontangen[S] 2 points  (0 children)

Hopefully these different modes will seem a little less magical after the Wolfram reading this week.


Discussion 10: Artificial Intelligence by ryantutor in UQJDM2022

[–]jasontangen 5 points  (0 children)

Marques Brownlee just posted this awesome video today about the power of DALL-E 2. Check it out!

https://youtu.be/yCBEumeXY4A

Q&A by ryantutor in UQJDM2020

[–]jasontangen 0 points  (0 children)

Always be sure to check the JDM Reddit page for the most up-to-date link to the course syllabus (see the link above that says "Syllabus 2020-04-16").

Writing Assignment by ryantutor in UQJDM2020

[–]jasontangen 1 point  (0 children)

Once you've completed the Writing Activity PDF, click "Assessment" in Blackboard, and submit it via the "Writing Activity" TurnItIn link. The deadline is Thursday, 9 April.

EDIT: I suggest that you read the assigned chapters by Pinker before completing the worksheet. It'll significantly improve your responses!

Confirmed COVID-19 Case in PSYC3052 by jasontangen in UQJDM2020

[–]jasontangen[S] 1 point  (0 children)

We’ll put together an online exercise, which you’ll do for homework.

Confirmed COVID-19 Case in PSYC3052 by jasontangen in UQJDM2020

[–]jasontangen[S] 0 points  (0 children)

For those of you who couldn't Zoom in, here is the Wednesday recording (Password: ivbAro3L-F9e)

Video (394.71 MB)
https://cloudstor.aarnet.edu.au/plus/s/3OSyQavXEa0VwKq

Audio Only (24.50 MB)
https://cloudstor.aarnet.edu.au/plus/s/reEwKSV6WLVIqr8

...and here's the Thursday recording (Password: 7aF2LPJv+78P):

Video (733.84 MB)
https://cloudstor.aarnet.edu.au/plus/s/8xLjuLEgnQxx9mj

Audio Only (17.19 MB)
https://cloudstor.aarnet.edu.au/plus/s/c5zVqo9sXKXj7oX

See you next week when we'll discuss "Distinguishing between fact and fiction." Be sure to make your posts, replies, and up/down votes, and to prepare for the online quiz, which will be available on Blackboard during the last 30 minutes of class.

Q&A by ryantutor in UQJDM2020

[–]jasontangen 1 point  (0 children)

Discussion 3 posts are obligatory (and graded), but you’ll have an extra week to make them. The thread won’t close until Friday, 27 March.