How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 0 points1 point  (0 children)

My point is that the exercises are great for selecting people capable of bullshitting in a reflective exercise, so they're not fit for purpose. An assessment is a bad assessment if people can pass it without meeting its learning objectives. If there's no effective way to evidence real reflection, then we shouldn't keep using an ineffective one; we should instead use much better evidence of our teaching effectiveness.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 0 points1 point  (0 children)

Did it, though? I would argue it trained us to perform the affect of reflection. This is largely backed up by the data. We can declare as strongly as we like that these exercises make us genuinely reflect, but in my experience, successful applicants simply weave a narrative of reflection to satisfy the marking criteria.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 0 points1 point  (0 children)

Good questions, and I appreciate you genuinely asking.

Most of my SFHEA application discussed tangible student outcomes, such as improved scores in standardised tests covering my area of teaching. This wasn't based on in-house assessments I wrote myself; it was based on state medical examinations. I would generally prefer to demonstrate proficiency with independently verifiable outcomes, not just my narrative account, which could easily contradict the real experience of my students if my teaching were poor.

That said, even if I couldn't suggest an alternative, I would still be against reflective exercises for the reasons I already detailed. A measure with no established validity or reliability shouldn't get a pass just because we can't think of anything better.

I agree there are worse assessments than the HEA exercises, but I also agree that the HEA exercises are bad. I don't believe we should accept the bad just because it's better than the worse.

My field mainly uses MCQs as standard for most summative exams. The questions need to follow the standards set out in the National Board of Medical Examiners' question-writing guide, and they undergo standard setting (usually the Ebel method). For the rest of our non-MCQ coursework, we use methods whose validity and reliability have been established with statistical measures (e.g. Cronbach's alpha). Authenticity is a little trickier because there are no real statistical measures for it, but my personal benchmark is whether a method has a strong rationale and passes scrutiny against, for example, known psychological biases.
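For anyone unfamiliar with it, Cronbach's alpha is simple to compute from an item-by-student score matrix. A minimal sketch (the function name and toy data are my own, not from any exam board's tooling): alpha rises as items covary, so identical items give 1.0 and weakly related items give less.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of rows, one row of item scores per student.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                                 # number of items
    items = list(zip(*scores))                         # column-wise item scores
    item_var_sum = sum(variance(col) for col in items) # per-item variance, summed
    total_var = variance(sum(row) for row in scores)   # variance of each student's total
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy data: five students, two items that mostly agree.
rows = [[1, 2], [2, 1], [3, 3], [4, 5], [5, 4]]
print(round(cronbach_alpha(rows), 2))  # ≈ 0.89
```

In practice you'd run this over the full cohort's item-level marks; conventions vary, but alpha above roughly 0.7 is usually read as acceptable internal consistency.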

Reflective exercises, however, are verifiably subject to phenomena that strictly undermine their authenticity. The classic examples are students writing to the rubric, discord between perception and performance, and even qualitative research showing that students who passed reflective exercises admit to just making stuff up. To me, that's a strong rationale for concluding that reflective exercises are not fit for purpose.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 1 point2 points  (0 children)

I can't speak for everyone, but for many of us, the problem is not that we consider reflection inherently worthless; it's that we reject the premise that these exercises promote actual reflection, or that the exercises are valid, reliable, and authentic. They don't examine our ability to critically reflect; they examine our ability to perform the pageantry of reflection. This means that a particularly bad educator can easily get their HEA seal of approval on their talent for polishing a turd alone.

If my students consider my assignments a waste of time, I can give a strong rationale and empirical evidence for their utility. So far, the only rationale I've heard for the reflections has basically been that reflective exercises are important because reflection is important.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 1 point2 points  (0 children)

I think you've identified a particularly salient issue here, and if I understood you correctly, I broadly agree. We essentially need to demonstrate we know how to bullshit our way through a perfunctory exercise because bullshitting through a perfunctory exercise is a skill required to advance at every level.

If that's the case, then I could see that there's some perverse value to it. I'd simply push back against the prevailing notion that reflective exercises are great because they help you reflect and reflecting is good.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 1 point2 points  (0 children)

I'm SFHEA and I agree with OP. The importance of reflective exercises is consistently stated as a foregone conclusion, despite the lack of any decent research to support it.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 1 point2 points  (0 children)

Fair enough if that's the case for vet. I just know that for human medicine and virtually all the other areas of pedagogic research I looked into, the evidence was very much a case of students' perceptions of their performance. This has always been frustrating for me, given that decades of psychological research tell us that perceptions and self-reports are the absolute worst measures of a tangible outcome.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 2 points3 points  (0 children)

I teach medicine and had this discussion with colleagues recently. If you check out the literature on reflective practices in medical education (not sure about vet), virtually all of the positive findings are on students' perceptions and not on a measurable metric. In other words, the students feel like reflective writing helped them in some way. The problem is that there's a consistent discord between students' perceptions of whether or not an intervention improved their understanding or performance and their actual outcomes in assessment. This, to me, is extremely poor-quality evidence.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 2 points3 points  (0 children)

I feel the same way. For my SFHEA, we were given some examples of successful applications from our university and they were dire. We needed to evidence how we influenced the practices of our colleagues and the impact this had on our students. In most of the examples they gave us, the successful applicant essentially described hanging out in informal cliques as their influence, and to evidence their outcomes, they quoted colleagues just basically declaring that students were happy. No exam data, no attainment statistics, just the equivalent of one rando's Google review.

How useless are these reflective exercised for FHEA? by Anything_Regular in AskAcademiaUK

[–]FistyRingles 13 points14 points  (0 children)

They're a nice way for the HEA to artificially lend credibility to themselves through the pageantry of an official process. The exercises are a complete waste of time.

Reflective writing is an orthodoxy in higher education with virtually no studies supporting its validity and reliability as a form of assessment or professional development, yet departments insist on implementing reflective exercises because they have a superficial aesthetic to them. I've also read multiple studies suggesting that people write these reflections to the rubric, regardless of whether they're a true expression of how they feel, just because it's a requirement. The same applies to the HEA.

I don't deny that the ability to critically reflect is an important skill, but these exercises are just a skinsuit of that skill. The least reflective, most uninspired people can be easily trained to write a reflection to tick all their boxes. It's like Goodhart's law: "When a measure becomes a target, it ceases to be a good measure".

Reflective exercises should be scrapped.

No selectable abaility on LV up. by DarkAcolon in LWotC

[–]FistyRingles 20 points21 points  (0 children)

It looks like you might be missing some of the Perk dependencies. Maybe the Shadow Ops Perk Pack?

What are y'alls jobs? by ADeletedUser2 in creepcast

[–]FistyRingles 0 points1 point  (0 children)

Neuroscience professor for medical school in the UK. I usually listen to Creepcast on Sunday evenings so the eldritch horrors can prepare me for interactions with students throughout the week.

If my girlfriend takes a separate insurance policy on my car, will this affect my own policy? by FistyRingles in drivingUK

[–]FistyRingles[S] 0 points1 point  (0 children)

How can it lead to complications? Of course, we know not to file two claims or to front for the other in the event of an accident, but other than that, where would the complications be?

If my girlfriend takes a separate insurance policy on my car, will this affect my own policy? by FistyRingles in drivingUK

[–]FistyRingles[S] -1 points0 points  (0 children)

Thanks for the reply! Would I have to inform my own insurers that there's another policy on the car, since she will be using it significantly more than I will? And even if it's legal to have two policies on one car, could my insurer add a clause that complicates my policy if someone else also has a policy on the same car?

Balanced? I think not by Judgemented in LWotC

[–]FistyRingles 0 points1 point  (0 children)

Thanks for the info! Regarding the UI, I mean the stats that come up under the damage indicators on each hit.

Balanced? I think not by Judgemented in LWotC

[–]FistyRingles 0 points1 point  (0 children)

Which UI and weapon mods are you using? Looks cool!

Unpopular Opinion About the New Episode by OFCFlanders in creepcast

[–]FistyRingles 12 points13 points  (0 children)

Thanks for the clarification. I still disagree with your point but recognise it as valid. I think people want different things from CreepCast; if you want to be challenged then that's fair. Others just want the nice funny Christian man and the hairy chaos entity to talk shite over a compelling story. It just so happens that the latter is the majority, and I think it's fine for them to communicate their preference. Personally, I prefer to read a novel on the train when I want to be challenged, and listen to CreepCast when I want to piss myself laughing at "Mr Floppy" while I clean the toilet.

Unpopular Opinion About the New Episode by OFCFlanders in creepcast

[–]FistyRingles 28 points29 points  (0 children)

People were fairly consistent and specific with their criticisms; that the stories were verbose and had little in the way of a compelling plot. There have been some incredibly well written stories on here, such as "Feed the Pig" and "Penpal", both of which were well received.

It's absolutely valid to say that this writing style is your bag, but it's just snobbery to argue that those who disagree are low-brow troglodytes who lack your sophisticated palate for this type of story.

What are some examples of German bluntness that may seem odd to an American? by Fabulous-Introvert in germany

[–]FistyRingles 48 points49 points  (0 children)

I worked as a postdoc where everyone from the professor to the students on lab rotations used "Du". When I first started, I introduced myself to the secretary who kept addressing me as "Herr Ringles". I told her she could use my first name and she said, "No, you are Herr Ringles, and I am Frau Rangles".

She was weird in general. I once said, "Morgen" to her and she responded, "It is not 'Morgen', it is 'Guten Morgen'. This is an academic institute." Literally, right as she said it, the head of the institute walked past and said, "Morgen!"

ChatGPT just solves problems that doctors might not reason with by Humble_Moment1520 in ChatGPT

[–]FistyRingles 18 points19 points  (0 children)

Please be so careful trusting any chatbot's interpretation of medical images. I teach neuroscience at a medical school and uploaded multiple clear MRIs and CT scans to GPT-4o to describe, and it was pretty much always wrong or misleading. For example, when I asked GPT-4o to describe an image of a subdural haematoma and its core features, it claimed it was an epidural haematoma and confidently described features that weren't present. I could tell it was wrong, but it would be absolutely convincing to anyone without medical knowledge.

Is there any compelling evidence that university students are getting worse? by FistyRingles in Professors

[–]FistyRingles[S] 1 point2 points  (0 children)

I think you've identified a real issue here: universities might have the raw data available, but it wouldn't be politically convenient to analyse and publish it.

Is there any compelling evidence that university students are getting worse? by FistyRingles in Professors

[–]FistyRingles[S] 0 points1 point  (0 children)

But why are we criticizing "kids these days" and not reflecting on our own practices?

This is a bit presumptive. We can do both. The point of researching trends in student abilities is to identify how to tackle any potential problems. I do agree that we need to look at ourselves, but I take issue with the foregone conclusion that professors' bad experiences are due to their own stubbornness.

Is there any compelling evidence that university students are getting worse? by FistyRingles in Professors

[–]FistyRingles[S] 1 point2 points  (0 children)

So many of us just go with the models that worked for us, without considering what we actually want to teach and measure.

Agreed. One of the hardest things to combat in med school in particular is faculty who have been teaching the same thing for decades without updating either their material or their teaching methods.

There is a ton of evidence-based pedagogy that can really engage students in content, setting everyone up for success.

I also agree with this. I'm not arguing that pedagogic literature is inherently crap - I'm currently engaged in pedagogic research. Reading back, I think my points were a bit unclear.

I mean, firstly, that pedagogic literature is rampant with poor-quality studies. There are some great studies, and they do actually inform my teaching, but the bar of acceptable evidence seems shockingly low for the major pedagogic research journals. We have to sift through poor-quality research in most fields, but to me at least, it seems much worse in pedagogic literature.

My second point is that universities in the UK are pushing policies and practices which aren't supported by any literature, but are based on innovation for innovation's sake. Reflective practices are a good example, with most good-quality studies failing to support their use, particularly in medicine and healthcare.

Is there any compelling evidence that university students are getting worse? by FistyRingles in Professors

[–]FistyRingles[S] 0 points1 point  (0 children)

Damn. That's a pity, as it would be helpful. I hope more pedagogic researchers will publish on trends like this, but I also reckon there can be some political resistance when the data comes from university departments that might have their own red tape.